I0505 21:07:32.201564 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0505 21:07:32.201874 7 e2e.go:109] Starting e2e run "588cbb0a-04d2-456c-a1b1-86bdc850e3a5" on Ginkgo node 1 {"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1588712851 - Will randomize all specs Will run 278 of 4842 specs May 5 21:07:32.255: INFO: >>> kubeConfig: /root/.kube/config May 5 21:07:32.260: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 5 21:07:32.290: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 5 21:07:32.322: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 5 21:07:32.322: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 5 21:07:32.322: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 5 21:07:32.330: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 5 21:07:32.330: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 5 21:07:32.330: INFO: e2e test version: v1.17.4 May 5 21:07:32.332: INFO: kube-apiserver version: v1.17.2 May 5 21:07:32.332: INFO: >>> kubeConfig: /root/.kube/config May 5 21:07:32.338: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:07:32.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook May 5 21:07:32.410: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 21:07:32.984: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 21:07:34.993: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309652, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309652, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309653, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309652, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:07:38.069: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:07:38.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1933-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:07:39.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8327" for this suite. STEP: Destroying namespace "webhook-8327-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.051 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":1,"skipped":25,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:07:39.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:07:50.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3555" for this suite. • [SLOW TEST:11.119 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":2,"skipped":43,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:07:50.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 21:07:51.255: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 21:07:53.265: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:07:56.363: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:07:56.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6230" for this suite. STEP: Destroying namespace "webhook-6230-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.174 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":3,"skipped":57,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:07:56.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 5 21:07:56.755: INFO: Waiting up to 5m0s for pod "downward-api-63132948-0cb6-4795-8519-9a1e92c13879" in namespace "downward-api-1119" to be "success or failure" May 5 21:07:56.759: INFO: Pod "downward-api-63132948-0cb6-4795-8519-9a1e92c13879": Phase="Pending", Reason="", readiness=false. Elapsed: 3.206574ms May 5 21:07:58.763: INFO: Pod "downward-api-63132948-0cb6-4795-8519-9a1e92c13879": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007372571s May 5 21:08:00.767: INFO: Pod "downward-api-63132948-0cb6-4795-8519-9a1e92c13879": Phase="Running", Reason="", readiness=true. Elapsed: 4.011247274s May 5 21:08:02.771: INFO: Pod "downward-api-63132948-0cb6-4795-8519-9a1e92c13879": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015354561s STEP: Saw pod success May 5 21:08:02.771: INFO: Pod "downward-api-63132948-0cb6-4795-8519-9a1e92c13879" satisfied condition "success or failure" May 5 21:08:02.774: INFO: Trying to get logs from node jerma-worker pod downward-api-63132948-0cb6-4795-8519-9a1e92c13879 container dapi-container: STEP: delete the pod May 5 21:08:02.867: INFO: Waiting for pod downward-api-63132948-0cb6-4795-8519-9a1e92c13879 to disappear May 5 21:08:02.869: INFO: Pod downward-api-63132948-0cb6-4795-8519-9a1e92c13879 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:02.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1119" for this suite. 
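The downward-api test above maps a container's own resource fields into environment variables via resourceFieldRef. A minimal sketch to reproduce the same check by hand (pod name, namespace, and image are illustrative, not taken from the suite):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-env-demo            # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.31              # any image with a shell works
      command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEM_REQUEST'"]
      resources:
        requests:
          cpu: 250m
          memory: 32Mi
        limits:
          cpu: 500m
          memory: 64Mi
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: limits.cpu
      - name: MEM_REQUEST
        valueFrom:
          resourceFieldRef:
            containerName: dapi-container
            resource: requests.memory
  EOF
  kubectl logs downward-env-demo       # expect CPU_LIMIT=1 (500m rounded up; divisor defaults to 1) and MEM_REQUEST=33554432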
• [SLOW TEST:6.195 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":94,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:02.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 5 21:08:02.930: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:10.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9937" for this suite. 
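The init-container test above asserts ordering: on a restartPolicy=Always pod, each init container must exit successfully before the next starts, and all must finish before the app container runs. A hand-rolled equivalent (names and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo                    # hypothetical name
  spec:
    restartPolicy: Always
    initContainers:                    # run one at a time, in order
    - name: init1
      image: busybox:1.31
      command: ["sh", "-c", "echo init1 done"]
    - name: init2
      image: busybox:1.31
      command: ["sh", "-c", "echo init2 done"]
    containers:
    - name: run1
      image: busybox:1.31
      command: ["sh", "-c", "sleep 3600"]
  EOF
  kubectl get pod init-demo -w         # STATUS walks Init:0/2 -> Init:1/2 -> PodInitializing -> Running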
• [SLOW TEST:7.243 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":5,"skipped":96,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:10.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:08:10.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-272fb420-7380-42a3-986a-6ecb056a4e5b" in namespace "downward-api-1153" to be "success or failure" May 5 21:08:10.226: INFO: Pod "downwardapi-volume-272fb420-7380-42a3-986a-6ecb056a4e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.495264ms May 5 21:08:12.230: INFO: Pod "downwardapi-volume-272fb420-7380-42a3-986a-6ecb056a4e5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00788307s May 5 21:08:14.235: INFO: Pod "downwardapi-volume-272fb420-7380-42a3-986a-6ecb056a4e5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012574286s STEP: Saw pod success May 5 21:08:14.235: INFO: Pod "downwardapi-volume-272fb420-7380-42a3-986a-6ecb056a4e5b" satisfied condition "success or failure" May 5 21:08:14.238: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-272fb420-7380-42a3-986a-6ecb056a4e5b container client-container: STEP: delete the pod May 5 21:08:14.298: INFO: Waiting for pod downwardapi-volume-272fb420-7380-42a3-986a-6ecb056a4e5b to disappear May 5 21:08:14.302: INFO: Pod downwardapi-volume-272fb420-7380-42a3-986a-6ecb056a4e5b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:14.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1153" for this suite. 
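The "podname only" case above projects a single downward API item into a volume and reads it back. A minimal reproduction (all names illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-volume-demo         # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.31
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name   # the only field this test asks for
  EOF
  kubectl logs downward-volume-demo    # prints the pod's own name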
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":103,"failed":0} SSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:14.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 5 21:08:20.968: INFO: Successfully updated pod "adopt-release-7xttm" STEP: Checking that the Job readopts the Pod May 5 21:08:20.968: INFO: Waiting up to 15m0s for pod "adopt-release-7xttm" in namespace "job-971" to be "adopted" May 5 21:08:20.990: INFO: Pod "adopt-release-7xttm": Phase="Running", Reason="", readiness=true. Elapsed: 21.599825ms May 5 21:08:22.994: INFO: Pod "adopt-release-7xttm": Phase="Running", Reason="", readiness=true. Elapsed: 2.025638355s May 5 21:08:22.994: INFO: Pod "adopt-release-7xttm" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 5 21:08:23.503: INFO: Successfully updated pod "adopt-release-7xttm" STEP: Checking that the Job releases the Pod May 5 21:08:23.503: INFO: Waiting up to 15m0s for pod "adopt-release-7xttm" in namespace "job-971" to be "released" May 5 21:08:23.515: INFO: Pod "adopt-release-7xttm": Phase="Running", Reason="", readiness=true. Elapsed: 11.752793ms May 5 21:08:25.519: INFO: Pod "adopt-release-7xttm": Phase="Running", Reason="", readiness=true. Elapsed: 2.016410307s May 5 21:08:25.519: INFO: Pod "adopt-release-7xttm" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:25.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-971" for this suite. 
• [SLOW TEST:11.219 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":7,"skipped":108,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:25.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 21:08:25.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2044' May 5 21:08:28.725: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 5 21:08:28.725: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 5 21:08:28.760: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-64q65] May 5 21:08:28.760: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-64q65" in namespace "kubectl-2044" to be "running and ready" May 5 21:08:28.766: INFO: Pod "e2e-test-httpd-rc-64q65": Phase="Pending", Reason="", readiness=false. Elapsed: 5.547931ms May 5 21:08:30.887: INFO: Pod "e2e-test-httpd-rc-64q65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126574054s May 5 21:08:32.891: INFO: Pod "e2e-test-httpd-rc-64q65": Phase="Running", Reason="", readiness=true. Elapsed: 4.130759487s May 5 21:08:32.891: INFO: Pod "e2e-test-httpd-rc-64q65" satisfied condition "running and ready" May 5 21:08:32.891: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-64q65] May 5 21:08:32.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2044' May 5 21:08:33.037: INFO: stderr: "" May 5 21:08:33.037: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.117. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.117. 
Set the 'ServerName' directive globally to suppress this message\n[Tue May 05 21:08:32.300497 2020] [mpm_event:notice] [pid 1:tid 139783434349416] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue May 05 21:08:32.300559 2020] [core:notice] [pid 1:tid 139783434349416] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 5 21:08:33.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2044' May 5 21:08:33.141: INFO: stderr: "" May 5 21:08:33.141: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:33.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2044" for this suite. • [SLOW TEST:7.634 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":8,"skipped":128,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:33.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 5 21:08:37.404: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:37.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6914" for this suite. 
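The termination-message case above checks that with terminationMessagePolicy: FallbackToLogsOnError the kubelet still reads the message file when the container succeeds; the fallback to logs only applies when the container fails with an empty file. A sketch (pod name and image illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo             # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: termination-message-container
      image: busybox:1.31
      command: ["sh", "-c", "printf OK > /dev/termination-log"]   # default terminationMessagePath
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # OK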
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":130,"failed":0} S ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:37.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:08:37.803: INFO: Creating deployment "test-recreate-deployment" May 5 21:08:37.808: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 5 21:08:37.867: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 5 21:08:39.874: INFO: Waiting deployment "test-recreate-deployment" to complete May 5 21:08:39.876: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309717, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309717, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309717, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309717, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:08:41.880: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 5 21:08:41.884: INFO: Updating deployment test-recreate-deployment May 5 21:08:41.884: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 5 21:08:42.373: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7402 /apis/apps/v1/namespaces/deployment-7402/deployments/test-recreate-deployment 4d37f749-2088-4c94-831b-ea4a5877205c 13671274 2 2020-05-05 21:08:37 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] 
[{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002359948 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-05 21:08:42 +0000 UTC,LastTransitionTime:2020-05-05 21:08:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-05 21:08:42 +0000 UTC,LastTransitionTime:2020-05-05 21:08:37 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 5 21:08:42.378: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-7402 /apis/apps/v1/namespaces/deployment-7402/replicasets/test-recreate-deployment-5f94c574ff 4d6a86c8-456a-45b9-b7a3-60b5fc94b7e2 13671272 1 2020-05-05 21:08:41 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 4d37f749-2088-4c94-831b-ea4a5877205c 0xc0016fe077 0xc0016fe078}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0016fe148 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 21:08:42.378: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 5 21:08:42.378: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-7402 /apis/apps/v1/namespaces/deployment-7402/replicasets/test-recreate-deployment-799c574856 8059e18c-beb4-4d7e-a894-88c3a1a3cbb5 13671263 2 2020-05-05 
21:08:37 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 4d37f749-2088-4c94-831b-ea4a5877205c 0xc0016fe287 0xc0016fe288}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0016fe328 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 21:08:42.430: INFO: Pod "test-recreate-deployment-5f94c574ff-pb8tp" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-pb8tp test-recreate-deployment-5f94c574ff- deployment-7402 /api/v1/namespaces/deployment-7402/pods/test-recreate-deployment-5f94c574ff-pb8tp 112244b6-9a1c-4b35-8c45-515168737589 13671276 0 2020-05-05 21:08:42 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 4d6a86c8-456a-45b9-b7a3-60b5fc94b7e2 0xc0016fe9a7 0xc0016fe9a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cpx2x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cpx2x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cpx2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 21:08:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 21:08:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 21:08:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 21:08:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-05 21:08:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:42.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7402" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":10,"skipped":131,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:42.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:08:42.503: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 17.256268ms)
May 5 21:08:42.507: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.7203ms)
May 5 21:08:42.511: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.595837ms)
May 5 21:08:42.515: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.946809ms)
May 5 21:08:42.518: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.824918ms)
May 5 21:08:42.521: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.29133ms)
May 5 21:08:42.523: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.514254ms)
May 5 21:08:42.526: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.91549ms)
May 5 21:08:42.556: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 29.675193ms)
May 5 21:08:42.570: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 14.182806ms)
May 5 21:08:42.823: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 252.850946ms)
May 5 21:08:42.827: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.354544ms)
May 5 21:08:42.833: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 5.459195ms)
May 5 21:08:42.837: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.540194ms)
May 5 21:08:42.841: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.043742ms)
May 5 21:08:42.843: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.701211ms)
May 5 21:08:42.846: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.660012ms)
May 5 21:08:42.848: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.173021ms)
May 5 21:08:42.851: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.38612ms)
May 5 21:08:42.853: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 2.312798ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:42.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4061" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":11,"skipped":148,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:42.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 5 21:08:43.002: INFO: namespace kubectl-855 May 5 21:08:43.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-855' May 5 21:08:43.352: INFO: stderr: "" May 5 21:08:43.352: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 5 21:08:44.406: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:08:44.406: INFO: Found 0 / 1 May 5 21:08:45.548: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:08:45.548: INFO: Found 0 / 1 May 5 21:08:46.360: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:08:46.360: INFO: Found 0 / 1 May 5 21:08:47.356: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:08:47.356: INFO: Found 1 / 1 May 5 21:08:47.356: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 5 21:08:47.359: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:08:47.359: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 5 21:08:47.359: INFO: wait on agnhost-master startup in kubectl-855 May 5 21:08:47.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-tbthl agnhost-master --namespace=kubectl-855' May 5 21:08:47.465: INFO: stderr: "" May 5 21:08:47.465: INFO: stdout: "Paused\n" STEP: exposing RC May 5 21:08:47.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-855' May 5 21:08:47.711: INFO: stderr: "" May 5 21:08:47.711: INFO: stdout: "service/rm2 exposed\n" May 5 21:08:47.945: INFO: Service rm2 in namespace kubectl-855 found. STEP: exposing service May 5 21:08:49.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-855' May 5 21:08:50.116: INFO: stderr: "" May 5 21:08:50.116: INFO: stdout: "service/rm3 exposed\n" May 5 21:08:50.126: INFO: Service rm3 in namespace kubectl-855 found. 
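Both expose calls above derive the new Service's selector from the source object, so rm2 (exposed from the RC) and rm3 (exposed from rm2) end up selecting the same agnhost pod; only the service port differs, while --target-port keeps traffic on 6379. Outside the suite the same check reduces to (namespace flag omitted):

  kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
  kubectl get endpoints rm2 rm3      # both list the same pod IP on port 6379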
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:52.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-855" for this suite. • [SLOW TEST:9.190 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":12,"skipped":149,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:52.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 5 21:08:52.201: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 21:08:52.218: INFO: Waiting for terminating namespaces to be deleted... 
May 5 21:08:52.221: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 5 21:08:52.253: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 21:08:52.253: INFO: Container kindnet-cni ready: true, restart count 0 May 5 21:08:52.253: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 21:08:52.253: INFO: Container kube-proxy ready: true, restart count 0 May 5 21:08:52.253: INFO: adopt-release-7xttm from job-971 started at 2020-05-05 21:08:14 +0000 UTC (1 container statuses recorded) May 5 21:08:52.253: INFO: Container c ready: true, restart count 0 May 5 21:08:52.253: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 5 21:08:52.260: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 21:08:52.260: INFO: Container kube-proxy ready: true, restart count 0 May 5 21:08:52.260: INFO: agnhost-master-tbthl from kubectl-855 started at 2020-05-05 21:08:43 +0000 UTC (1 container statuses recorded) May 5 21:08:52.260: INFO: Container agnhost-master ready: true, restart count 0 May 5 21:08:52.260: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 5 21:08:52.260: INFO: Container kube-hunter ready: false, restart count 0 May 5 21:08:52.260: INFO: adopt-release-45p59 from job-971 started at 2020-05-05 21:08:23 +0000 UTC (1 container statuses recorded) May 5 21:08:52.260: INFO: Container c ready: true, restart count 0 May 5 21:08:52.260: INFO: adopt-release-r5hnk from job-971 started at 2020-05-05 21:08:14 +0000 UTC (1 container statuses recorded) May 5 21:08:52.260: INFO: Container c ready: true, restart count 0 May 5 21:08:52.260: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 21:08:52.260: INFO: Container kindnet-cni ready: true, restart count 0 May 5 21:08:52.260: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 5 21:08:52.260: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160c3dfe9a688c9c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:53.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3681" for this suite. 
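The scheduling failure above is the expected outcome: a nodeSelector that matches no node label leaves the pod Pending with a FailedScheduling event. Reproduction sketch (pod name and label are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: restricted-pod-demo          # hypothetical name
  spec:
    nodeSelector:
      no-such-label: "42"              # matches no node on purpose
    containers:
    - name: main
      image: busybox:1.31
      command: ["sleep", "3600"]
  EOF
  kubectl describe pod restricted-pod-demo   # events show the same "didn't match node selector" warning as above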
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":13,"skipped":152,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:53.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-99343575-09a2-4520-9a91-15381a992b51 STEP: Creating a pod to test consume configMaps May 5 21:08:53.380: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5b216f3d-0ab8-4a7a-90bd-f8fb79a7bc33" in namespace "projected-3970" to be "success or failure" May 5 21:08:53.398: INFO: Pod "pod-projected-configmaps-5b216f3d-0ab8-4a7a-90bd-f8fb79a7bc33": Phase="Pending", Reason="", readiness=false. Elapsed: 17.969138ms May 5 21:08:55.418: INFO: Pod "pod-projected-configmaps-5b216f3d-0ab8-4a7a-90bd-f8fb79a7bc33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038189356s May 5 21:08:57.421: INFO: Pod "pod-projected-configmaps-5b216f3d-0ab8-4a7a-90bd-f8fb79a7bc33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041035881s STEP: Saw pod success May 5 21:08:57.421: INFO: Pod "pod-projected-configmaps-5b216f3d-0ab8-4a7a-90bd-f8fb79a7bc33" satisfied condition "success or failure" May 5 21:08:57.426: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-5b216f3d-0ab8-4a7a-90bd-f8fb79a7bc33 container projected-configmap-volume-test: STEP: delete the pod May 5 21:08:57.672: INFO: Waiting for pod pod-projected-configmaps-5b216f3d-0ab8-4a7a-90bd-f8fb79a7bc33 to disappear May 5 21:08:57.705: INFO: Pod pod-projected-configmaps-5b216f3d-0ab8-4a7a-90bd-f8fb79a7bc33 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:08:57.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3970" for this suite. 
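"Mappings and Item mode" above means the configMap key is remapped to a different file path and given an explicit per-item file mode inside a projected volume. A minimal reproduction (names illustrative):

  kubectl create configmap projected-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo            # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox:1.31
      command: ["sh", "-c", "ls -l /etc/projected/path/to/data-2 && cat /etc/projected/path/to/data-2"]
      volumeMounts:
      - name: cm
        mountPath: /etc/projected
    volumes:
    - name: cm
      projected:
        sources:
        - configMap:
            name: projected-demo
            items:
            - key: data-1
              path: path/to/data-2     # key remapped to a new path
              mode: 0400               # per-item mode (octal)
  EOF
  kubectl logs projected-cm-demo       # shows -r-------- on data-2, then prints value-1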
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":158,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:08:57.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 21:08:58.568: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 21:09:00.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309738, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309738, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:09:03.622: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 5 21:09:03.654: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:09:03.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2779" for this suite. STEP: Destroying namespace "webhook-2779-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.131 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":15,"skipped":172,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:09:03.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7963 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7963 STEP: creating replication controller externalsvc in namespace services-7963 I0505 21:09:04.484037 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7963, replica count: 2 I0505 21:09:07.534468 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 21:09:10.534736 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 5 21:09:10.577: INFO: Creating new exec pod May 5 21:09:14.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7963 execpodcsnmh -- /bin/sh -x -c nslookup clusterip-service' May 5 21:09:14.980: INFO: stderr: "I0505 21:09:14.763207 190 log.go:172] (0xc000104dc0) (0xc00068dea0) Create stream\nI0505 21:09:14.763284 190 log.go:172] (0xc000104dc0) (0xc00068dea0) Stream added, broadcasting: 1\nI0505 21:09:14.767009 190 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0505 21:09:14.767054 190 log.go:172] (0xc000104dc0) (0xc000638780) Create stream\nI0505 21:09:14.767065 190 log.go:172] (0xc000104dc0) (0xc000638780) Stream added, broadcasting: 3\nI0505 21:09:14.768152 190 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0505 21:09:14.768199 190 log.go:172] (0xc000104dc0) (0xc00068df40) Create stream\nI0505 21:09:14.768220 190 log.go:172] (0xc000104dc0) (0xc00068df40) Stream added, broadcasting: 5\nI0505 21:09:14.769679 190 log.go:172] (0xc000104dc0) Reply frame received for 
5\nI0505 21:09:14.925567 190 log.go:172] (0xc000104dc0) Data frame received for 5\nI0505 21:09:14.925595 190 log.go:172] (0xc00068df40) (5) Data frame handling\nI0505 21:09:14.925638 190 log.go:172] (0xc00068df40) (5) Data frame sent\n+ nslookup clusterip-service\nI0505 21:09:14.972954 190 log.go:172] (0xc000104dc0) Data frame received for 3\nI0505 21:09:14.972987 190 log.go:172] (0xc000638780) (3) Data frame handling\nI0505 21:09:14.973038 190 log.go:172] (0xc000638780) (3) Data frame sent\nI0505 21:09:14.973991 190 log.go:172] (0xc000104dc0) Data frame received for 3\nI0505 21:09:14.974007 190 log.go:172] (0xc000638780) (3) Data frame handling\nI0505 21:09:14.974024 190 log.go:172] (0xc000638780) (3) Data frame sent\nI0505 21:09:14.974358 190 log.go:172] (0xc000104dc0) Data frame received for 3\nI0505 21:09:14.974373 190 log.go:172] (0xc000638780) (3) Data frame handling\nI0505 21:09:14.974627 190 log.go:172] (0xc000104dc0) Data frame received for 5\nI0505 21:09:14.974643 190 log.go:172] (0xc00068df40) (5) Data frame handling\nI0505 21:09:14.976285 190 log.go:172] (0xc000104dc0) Data frame received for 1\nI0505 21:09:14.976305 190 log.go:172] (0xc00068dea0) (1) Data frame handling\nI0505 21:09:14.976324 190 log.go:172] (0xc00068dea0) (1) Data frame sent\nI0505 21:09:14.976343 190 log.go:172] (0xc000104dc0) (0xc00068dea0) Stream removed, broadcasting: 1\nI0505 21:09:14.976358 190 log.go:172] (0xc000104dc0) Go away received\nI0505 21:09:14.976747 190 log.go:172] (0xc000104dc0) (0xc00068dea0) Stream removed, broadcasting: 1\nI0505 21:09:14.976761 190 log.go:172] (0xc000104dc0) (0xc000638780) Stream removed, broadcasting: 3\nI0505 21:09:14.976766 190 log.go:172] (0xc000104dc0) (0xc00068df40) Stream removed, broadcasting: 5\n" May 5 21:09:14.981: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7963.svc.cluster.local\tcanonical name = externalsvc.services-7963.svc.cluster.local.\nName:\texternalsvc.services-7963.svc.cluster.local\nAddress: 10.107.13.225\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7963, will wait for the garbage collector to delete the pods May 5 21:09:15.039: INFO: Deleting ReplicationController externalsvc took: 4.50942ms May 5 21:09:15.139: INFO: Terminating ReplicationController externalsvc pods took: 100.253506ms May 5 21:09:29.599: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:09:29.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7963" for this suite. 
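Note: the Services spec above retypes a ClusterIP Service to ExternalName, after which cluster DNS answers with a CNAME rather than a cluster IP; the nslookup output in the run shows exactly that (clusterip-service resolving via a canonical name to externalsvc). One way to express the change, reusing the names from this run; the patch shape is a sketch, and clearing spec.clusterIP is required for this transition:

    kubectl patch service clusterip-service -n services-7963 --type merge -p '
    {
      "spec": {
        "type": "ExternalName",
        "externalName": "externalsvc.services-7963.svc.cluster.local",
        "clusterIP": null
      }
    }'
    # then, from any pod in the cluster:
    nslookup clusterip-service.services-7963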
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.736 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":16,"skipped":183,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:09:29.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:09:29.748: INFO: Waiting up to 5m0s for pod "downwardapi-volume-347e4591-301b-4fb2-869a-bc438a26f198" in namespace "projected-5742" to be "success or failure" May 5 21:09:29.784: INFO: Pod "downwardapi-volume-347e4591-301b-4fb2-869a-bc438a26f198": Phase="Pending", Reason="", readiness=false. Elapsed: 35.392841ms May 5 21:09:31.801: INFO: Pod "downwardapi-volume-347e4591-301b-4fb2-869a-bc438a26f198": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052940252s May 5 21:09:33.874: INFO: Pod "downwardapi-volume-347e4591-301b-4fb2-869a-bc438a26f198": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125683758s STEP: Saw pod success May 5 21:09:33.874: INFO: Pod "downwardapi-volume-347e4591-301b-4fb2-869a-bc438a26f198" satisfied condition "success or failure" May 5 21:09:33.878: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-347e4591-301b-4fb2-869a-bc438a26f198 container client-container: STEP: delete the pod May 5 21:09:34.019: INFO: Waiting for pod downwardapi-volume-347e4591-301b-4fb2-869a-bc438a26f198 to disappear May 5 21:09:34.041: INFO: Pod downwardapi-volume-347e4591-301b-4fb2-869a-bc438a26f198 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:09:34.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5742" for this suite. 
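Note: the downwardAPI spec above surfaces a container's memory request through a projected downwardAPI volume via resourceFieldRef. A minimal sketch with hypothetical names; the request value is arbitrary, and with the default divisor of 1 the mounted file holds the request in bytes (33554432 for 32Mi):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-demo        # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["cat", "/etc/podinfo/mem_request"]
        resources:
          requests:
            memory: "32Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: mem_request
                resourceFieldRef:
                  containerName: client-container
                  resource: requests.memory
    EOF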
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":187,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:09:34.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3646.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3646.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3646.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3646.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3646.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3646.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 21:09:40.624: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:40.639: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:40.642: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:40.645: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:40.655: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:40.658: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:40.661: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:40.664: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:40.670: INFO: Lookups using dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local] May 5 21:09:45.676: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods 
dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:45.679: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:45.684: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:45.687: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:45.697: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:45.700: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:45.703: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:45.707: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:45.712: INFO: Lookups using dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local] May 5 21:09:50.675: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:50.680: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:50.683: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:50.686: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod 
dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:50.695: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:50.698: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:50.701: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:50.704: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:50.710: INFO: Lookups using dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local] May 5 21:09:55.675: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:55.679: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:55.681: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:55.684: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:55.691: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:55.693: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods 
dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:55.696: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:55.700: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:09:55.706: INFO: Lookups using dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local] May 5 21:10:00.675: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:00.679: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:00.682: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:00.686: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:00.695: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:00.698: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:00.701: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:00.705: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:00.712: INFO: Lookups using dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9 failed for: 
[wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local] May 5 21:10:05.675: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:05.679: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:05.683: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:05.686: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:05.695: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:05.698: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:05.702: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:05.704: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local from pod dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9: the server could not find the requested resource (get pods dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9) May 5 21:10:05.710: INFO: Lookups using dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3646.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3646.svc.cluster.local jessie_udp@dns-test-service-2.dns-3646.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3646.svc.cluster.local] May 5 21:10:10.705: INFO: DNS probes using dns-3646/dns-test-209e3cf1-7b9d-4b62-ab43-30e072c76ca9 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:10:10.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3646" for this suite. • [SLOW TEST:37.143 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":18,"skipped":192,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:10:11.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-4755ebd2-6fb9-4dc6-b09b-1f9f35650931 STEP: Creating a pod to test consume configMaps May 5 21:10:11.511: INFO: Waiting up to 5m0s for pod "pod-configmaps-859a5269-c1d9-488e-ac8d-7d1361d7b9e8" in namespace "configmap-63" to be "success or failure" May 5 21:10:11.533: INFO: Pod "pod-configmaps-859a5269-c1d9-488e-ac8d-7d1361d7b9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 22.375201ms May 5 21:10:13.551: INFO: Pod "pod-configmaps-859a5269-c1d9-488e-ac8d-7d1361d7b9e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04009078s May 5 21:10:15.555: INFO: Pod "pod-configmaps-859a5269-c1d9-488e-ac8d-7d1361d7b9e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044464492s STEP: Saw pod success May 5 21:10:15.555: INFO: Pod "pod-configmaps-859a5269-c1d9-488e-ac8d-7d1361d7b9e8" satisfied condition "success or failure" May 5 21:10:15.558: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-859a5269-c1d9-488e-ac8d-7d1361d7b9e8 container configmap-volume-test: STEP: delete the pod May 5 21:10:15.607: INFO: Waiting for pod pod-configmaps-859a5269-c1d9-488e-ac8d-7d1361d7b9e8 to disappear May 5 21:10:15.617: INFO: Pod pod-configmaps-859a5269-c1d9-488e-ac8d-7d1361d7b9e8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:10:15.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-63" for this suite. 
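Note on the two specs above: the [sig-network] DNS spec creates a headless Service plus probe pods and polls, over both UDP and TCP, the subdomain names <hostname>.<service>.<ns>.svc.cluster.local and <service>.<ns>.svc.cluster.local until both resolve; the repeated "Unable to read ..." lines are that polling loop before the DNS records converge, ending in "DNS probes ... succeeded". The configMap spec is the same key-to-path mapping pattern sketched earlier, run under a non-root security context. The DNS shape being probed, with hypothetical names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: sub-svc             # hypothetical name
    spec:
      clusterIP: None           # headless, as in the spec
      selector:
        app: dns-demo
      ports:
      - port: 80
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: querier             # hypothetical name
      labels:
        app: dns-demo
    spec:
      hostname: querier         # gives the pod an A record under the subdomain
      subdomain: sub-svc        # must match the headless Service name
      containers:
      - name: q
        image: busybox
        command: ["sleep", "3600"]
    EOF
    # from inside the cluster, both of these should eventually resolve:
    #   nslookup querier.sub-svc.<namespace>.svc.cluster.local
    #   nslookup sub-svc.<namespace>.svc.cluster.local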
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:10:15.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 5 21:10:16.120: INFO: Waiting up to 5m0s for pod "pod-79e4e776-cfd9-4935-a078-17b10026f066" in namespace "emptydir-7600" to be "success or failure" May 5 21:10:16.126: INFO: Pod "pod-79e4e776-cfd9-4935-a078-17b10026f066": Phase="Pending", Reason="", readiness=false. Elapsed: 6.317433ms May 5 21:10:18.130: INFO: Pod "pod-79e4e776-cfd9-4935-a078-17b10026f066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01010531s May 5 21:10:20.134: INFO: Pod "pod-79e4e776-cfd9-4935-a078-17b10026f066": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01422343s STEP: Saw pod success May 5 21:10:20.134: INFO: Pod "pod-79e4e776-cfd9-4935-a078-17b10026f066" satisfied condition "success or failure" May 5 21:10:20.137: INFO: Trying to get logs from node jerma-worker pod pod-79e4e776-cfd9-4935-a078-17b10026f066 container test-container: STEP: delete the pod May 5 21:10:20.155: INFO: Waiting for pod pod-79e4e776-cfd9-4935-a078-17b10026f066 to disappear May 5 21:10:20.179: INFO: Pod pod-79e4e776-cfd9-4935-a078-17b10026f066 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:10:20.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7600" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:10:20.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 5 21:10:20.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3987' May 5 21:10:20.559: INFO: stderr: "" May 5 21:10:20.559: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 5 21:10:20.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3987' May 5 21:10:20.705: INFO: stderr: "" May 5 21:10:20.705: INFO: stdout: "update-demo-nautilus-kxzfl update-demo-nautilus-z8clf " May 5 21:10:20.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxzfl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3987' May 5 21:10:20.814: INFO: stderr: "" May 5 21:10:20.814: INFO: stdout: "" May 5 21:10:20.814: INFO: update-demo-nautilus-kxzfl is created but not running May 5 21:10:25.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3987' May 5 21:10:25.919: INFO: stderr: "" May 5 21:10:25.919: INFO: stdout: "update-demo-nautilus-kxzfl update-demo-nautilus-z8clf " May 5 21:10:25.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxzfl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3987' May 5 21:10:26.014: INFO: stderr: "" May 5 21:10:26.014: INFO: stdout: "true" May 5 21:10:26.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxzfl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3987' May 5 21:10:26.101: INFO: stderr: "" May 5 21:10:26.101: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 21:10:26.101: INFO: validating pod update-demo-nautilus-kxzfl May 5 21:10:26.114: INFO: got data: { "image": "nautilus.jpg" } May 5 21:10:26.114: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 21:10:26.114: INFO: update-demo-nautilus-kxzfl is verified up and running May 5 21:10:26.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z8clf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3987' May 5 21:10:26.209: INFO: stderr: "" May 5 21:10:26.209: INFO: stdout: "true" May 5 21:10:26.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z8clf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3987' May 5 21:10:26.307: INFO: stderr: "" May 5 21:10:26.307: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 21:10:26.308: INFO: validating pod update-demo-nautilus-z8clf May 5 21:10:26.311: INFO: got data: { "image": "nautilus.jpg" } May 5 21:10:26.311: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 21:10:26.311: INFO: update-demo-nautilus-z8clf is verified up and running STEP: rolling-update to new replication controller May 5 21:10:26.313: INFO: scanned /root for discovery docs: May 5 21:10:26.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3987' May 5 21:10:48.978: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 5 21:10:48.978: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 5 21:10:48.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3987' May 5 21:10:49.085: INFO: stderr: "" May 5 21:10:49.085: INFO: stdout: "update-demo-kitten-srkhs update-demo-kitten-zcc2j update-demo-nautilus-kxzfl " STEP: Replicas for name=update-demo: expected=2 actual=3 May 5 21:10:54.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3987' May 5 21:10:54.190: INFO: stderr: "" May 5 21:10:54.190: INFO: stdout: "update-demo-kitten-srkhs update-demo-kitten-zcc2j " May 5 21:10:54.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-srkhs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3987' May 5 21:10:54.279: INFO: stderr: "" May 5 21:10:54.279: INFO: stdout: "true" May 5 21:10:54.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-srkhs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3987' May 5 21:10:54.380: INFO: stderr: "" May 5 21:10:54.380: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 5 21:10:54.380: INFO: validating pod update-demo-kitten-srkhs May 5 21:10:54.384: INFO: got data: { "image": "kitten.jpg" } May 5 21:10:54.384: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 5 21:10:54.384: INFO: update-demo-kitten-srkhs is verified up and running May 5 21:10:54.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zcc2j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3987' May 5 21:10:54.477: INFO: stderr: "" May 5 21:10:54.477: INFO: stdout: "true" May 5 21:10:54.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zcc2j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3987' May 5 21:10:54.576: INFO: stderr: "" May 5 21:10:54.577: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 5 21:10:54.577: INFO: validating pod update-demo-kitten-zcc2j May 5 21:10:54.582: INFO: got data: { "image": "kitten.jpg" } May 5 21:10:54.582: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 5 21:10:54.582: INFO: update-demo-kitten-zcc2j is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:10:54.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3987" for this suite. 
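Note: the Update Demo spec above drives the v1.17-era kubectl rolling-update path, replacing the nautilus replication controller with a kitten one pod at a time and renaming the new controller afterwards; the run's own stderr already flags the command as deprecated. Both forms for reference; the manifest file name is a placeholder, and rolling-update was removed from later kubectl releases:

    # form exercised by the spec:
    kubectl rolling-update update-demo-nautilus --update-period=1s -f new-rc.yaml

    # modern equivalent using a Deployment:
    kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
    kubectl rollout status deployment/update-demo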
• [SLOW TEST:34.401 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":21,"skipped":251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:10:54.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-346d9764-5598-4f1c-a711-b13656fd74d1 STEP: Creating a pod to test consume configMaps May 5 21:10:54.667: INFO: Waiting up to 5m0s for pod "pod-configmaps-e44a4fe6-a3a9-4ea7-b3a6-78877a4d570b" in namespace "configmap-8269" to be "success or failure" May 5 21:10:54.719: INFO: Pod "pod-configmaps-e44a4fe6-a3a9-4ea7-b3a6-78877a4d570b": Phase="Pending", Reason="", readiness=false. Elapsed: 52.168017ms May 5 21:10:56.834: INFO: Pod "pod-configmaps-e44a4fe6-a3a9-4ea7-b3a6-78877a4d570b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166822487s May 5 21:10:58.838: INFO: Pod "pod-configmaps-e44a4fe6-a3a9-4ea7-b3a6-78877a4d570b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.170417715s STEP: Saw pod success May 5 21:10:58.838: INFO: Pod "pod-configmaps-e44a4fe6-a3a9-4ea7-b3a6-78877a4d570b" satisfied condition "success or failure" May 5 21:10:58.840: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e44a4fe6-a3a9-4ea7-b3a6-78877a4d570b container configmap-volume-test: STEP: delete the pod May 5 21:10:58.971: INFO: Waiting for pod pod-configmaps-e44a4fe6-a3a9-4ea7-b3a6-78877a4d570b to disappear May 5 21:10:59.000: INFO: Pod pod-configmaps-e44a4fe6-a3a9-4ea7-b3a6-78877a4d570b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:10:59.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8269" for this suite. 
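Note: the configMap spec above is the plain (non-projected) counterpart of the projected mapping sketched earlier: a key is exposed under a remapped path via items. Only the volume stanza changes; this fragment, with hypothetical names, is a drop-in replacement for the volumes section of the earlier pod sketch:

    volumes:
    - name: cfg
      configMap:
        name: demo-config
        items:
        - key: original-key
          path: renamed/key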
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":298,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:10:59.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 5 21:10:59.086: INFO: Waiting up to 5m0s for pod "pod-c64e94ee-4811-49d1-98cf-29e6d4c7b148" in namespace "emptydir-9743" to be "success or failure" May 5 21:10:59.090: INFO: Pod "pod-c64e94ee-4811-49d1-98cf-29e6d4c7b148": Phase="Pending", Reason="", readiness=false. Elapsed: 3.90923ms May 5 21:11:01.094: INFO: Pod "pod-c64e94ee-4811-49d1-98cf-29e6d4c7b148": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007452115s May 5 21:11:03.128: INFO: Pod "pod-c64e94ee-4811-49d1-98cf-29e6d4c7b148": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042046903s May 5 21:11:05.131: INFO: Pod "pod-c64e94ee-4811-49d1-98cf-29e6d4c7b148": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045158819s STEP: Saw pod success May 5 21:11:05.131: INFO: Pod "pod-c64e94ee-4811-49d1-98cf-29e6d4c7b148" satisfied condition "success or failure" May 5 21:11:05.134: INFO: Trying to get logs from node jerma-worker2 pod pod-c64e94ee-4811-49d1-98cf-29e6d4c7b148 container test-container: STEP: delete the pod May 5 21:11:05.189: INFO: Waiting for pod pod-c64e94ee-4811-49d1-98cf-29e6d4c7b148 to disappear May 5 21:11:05.193: INFO: Pod pod-c64e94ee-4811-49d1-98cf-29e6d4c7b148 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:11:05.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9743" for this suite. 
• [SLOW TEST:6.193 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":311,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:11:05.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 21:11:05.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5151' May 5 21:11:05.346: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 5 21:11:05.346: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 5 21:11:09.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5151' May 5 21:11:09.478: INFO: stderr: "" May 5 21:11:09.478: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:11:09.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5151" for this suite. 
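Note: the kubectl-run spec above uses the v1.17-era generator flag, and the run's stderr flags it as deprecated; the generator flags were removed from later kubectl releases. Both forms for reference:

    # form exercised by the spec:
    kubectl run e2e-test-httpd-deployment \
      --image=docker.io/library/httpd:2.4.38-alpine \
      --generator=deployment/apps.v1

    # modern equivalent:
    kubectl create deployment e2e-test-httpd-deployment \
      --image=docker.io/library/httpd:2.4.38-alpine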
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":24,"skipped":321,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:11:09.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:11:15.654: INFO: Waiting up to 5m0s for pod "client-envvars-b7ac31c2-1d11-4aa4-b0c8-3a173d1c2283" in namespace "pods-3618" to be "success or failure" May 5 21:11:15.697: INFO: Pod "client-envvars-b7ac31c2-1d11-4aa4-b0c8-3a173d1c2283": Phase="Pending", Reason="", readiness=false. Elapsed: 42.935899ms May 5 21:11:17.700: INFO: Pod "client-envvars-b7ac31c2-1d11-4aa4-b0c8-3a173d1c2283": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046255374s May 5 21:11:19.704: INFO: Pod "client-envvars-b7ac31c2-1d11-4aa4-b0c8-3a173d1c2283": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05024458s STEP: Saw pod success May 5 21:11:19.704: INFO: Pod "client-envvars-b7ac31c2-1d11-4aa4-b0c8-3a173d1c2283" satisfied condition "success or failure" May 5 21:11:19.707: INFO: Trying to get logs from node jerma-worker pod client-envvars-b7ac31c2-1d11-4aa4-b0c8-3a173d1c2283 container env3cont: STEP: delete the pod May 5 21:11:19.730: INFO: Waiting for pod client-envvars-b7ac31c2-1d11-4aa4-b0c8-3a173d1c2283 to disappear May 5 21:11:19.733: INFO: Pod client-envvars-b7ac31c2-1d11-4aa4-b0c8-3a173d1c2283 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:11:19.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3618" for this suite. 
• [SLOW TEST:10.254 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:11:19.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 5 21:11:19.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8623' May 5 21:11:20.070: INFO: stderr: "" May 5 21:11:20.071: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 5 21:11:20.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8623' May 5 21:11:20.191: INFO: stderr: "" May 5 21:11:20.191: INFO: stdout: "update-demo-nautilus-gjnd7 update-demo-nautilus-kxfjp " May 5 21:11:20.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjnd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:20.300: INFO: stderr: "" May 5 21:11:20.300: INFO: stdout: "" May 5 21:11:20.300: INFO: update-demo-nautilus-gjnd7 is created but not running May 5 21:11:25.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8623' May 5 21:11:25.405: INFO: stderr: "" May 5 21:11:25.405: INFO: stdout: "update-demo-nautilus-gjnd7 update-demo-nautilus-kxfjp " May 5 21:11:25.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjnd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:25.493: INFO: stderr: "" May 5 21:11:25.493: INFO: stdout: "true" May 5 21:11:25.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjnd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:25.591: INFO: stderr: "" May 5 21:11:25.591: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 21:11:25.591: INFO: validating pod update-demo-nautilus-gjnd7 May 5 21:11:25.595: INFO: got data: { "image": "nautilus.jpg" } May 5 21:11:25.595: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 21:11:25.595: INFO: update-demo-nautilus-gjnd7 is verified up and running May 5 21:11:25.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxfjp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:25.691: INFO: stderr: "" May 5 21:11:25.691: INFO: stdout: "true" May 5 21:11:25.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kxfjp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:25.777: INFO: stderr: "" May 5 21:11:25.777: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 21:11:25.778: INFO: validating pod update-demo-nautilus-kxfjp May 5 21:11:25.782: INFO: got data: { "image": "nautilus.jpg" } May 5 21:11:25.782: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 21:11:25.782: INFO: update-demo-nautilus-kxfjp is verified up and running STEP: scaling down the replication controller May 5 21:11:25.784: INFO: scanned /root for discovery docs: May 5 21:11:25.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8623' May 5 21:11:26.948: INFO: stderr: "" May 5 21:11:26.948: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 5 21:11:26.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8623' May 5 21:11:27.054: INFO: stderr: "" May 5 21:11:27.054: INFO: stdout: "update-demo-nautilus-gjnd7 update-demo-nautilus-kxfjp " STEP: Replicas for name=update-demo: expected=1 actual=2 May 5 21:11:32.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8623' May 5 21:11:32.170: INFO: stderr: "" May 5 21:11:32.170: INFO: stdout: "update-demo-nautilus-gjnd7 update-demo-nautilus-kxfjp " STEP: Replicas for name=update-demo: expected=1 actual=2 May 5 21:11:37.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8623' May 5 21:11:37.273: INFO: stderr: "" May 5 21:11:37.273: INFO: stdout: "update-demo-nautilus-gjnd7 update-demo-nautilus-kxfjp " STEP: Replicas for name=update-demo: expected=1 actual=2 May 5 21:11:42.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8623' May 5 21:11:42.372: INFO: stderr: "" May 5 21:11:42.372: INFO: stdout: "update-demo-nautilus-gjnd7 " May 5 21:11:42.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjnd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:42.465: INFO: stderr: "" May 5 21:11:42.465: INFO: stdout: "true" May 5 21:11:42.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjnd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:42.565: INFO: stderr: "" May 5 21:11:42.565: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 21:11:42.565: INFO: validating pod update-demo-nautilus-gjnd7 May 5 21:11:42.569: INFO: got data: { "image": "nautilus.jpg" } May 5 21:11:42.569: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 21:11:42.569: INFO: update-demo-nautilus-gjnd7 is verified up and running STEP: scaling up the replication controller May 5 21:11:42.571: INFO: scanned /root for discovery docs: May 5 21:11:42.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8623' May 5 21:11:43.685: INFO: stderr: "" May 5 21:11:43.685: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 5 21:11:43.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8623' May 5 21:11:43.788: INFO: stderr: "" May 5 21:11:43.788: INFO: stdout: "update-demo-nautilus-fvs4x update-demo-nautilus-gjnd7 " May 5 21:11:43.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvs4x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:43.890: INFO: stderr: "" May 5 21:11:43.890: INFO: stdout: "" May 5 21:11:43.890: INFO: update-demo-nautilus-fvs4x is created but not running May 5 21:11:48.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8623' May 5 21:11:49.030: INFO: stderr: "" May 5 21:11:49.030: INFO: stdout: "update-demo-nautilus-fvs4x update-demo-nautilus-gjnd7 " May 5 21:11:49.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvs4x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:49.125: INFO: stderr: "" May 5 21:11:49.125: INFO: stdout: "true" May 5 21:11:49.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvs4x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:49.239: INFO: stderr: "" May 5 21:11:49.239: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 21:11:49.239: INFO: validating pod update-demo-nautilus-fvs4x May 5 21:11:49.243: INFO: got data: { "image": "nautilus.jpg" } May 5 21:11:49.243: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 21:11:49.243: INFO: update-demo-nautilus-fvs4x is verified up and running May 5 21:11:49.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjnd7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:49.326: INFO: stderr: "" May 5 21:11:49.326: INFO: stdout: "true" May 5 21:11:49.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gjnd7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8623' May 5 21:11:49.428: INFO: stderr: "" May 5 21:11:49.428: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 21:11:49.428: INFO: validating pod update-demo-nautilus-gjnd7 May 5 21:11:49.431: INFO: got data: { "image": "nautilus.jpg" } May 5 21:11:49.431: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 5 21:11:49.431: INFO: update-demo-nautilus-gjnd7 is verified up and running STEP: using delete to clean up resources May 5 21:11:49.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8623' May 5 21:11:49.563: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 21:11:49.563: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 5 21:11:49.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8623' May 5 21:11:49.678: INFO: stderr: "No resources found in kubectl-8623 namespace.\n" May 5 21:11:49.678: INFO: stdout: "" May 5 21:11:49.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8623 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 5 21:11:49.776: INFO: stderr: "" May 5 21:11:49.776: INFO: stdout: "update-demo-nautilus-fvs4x\nupdate-demo-nautilus-gjnd7\n" May 5 21:11:50.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8623' May 5 21:11:50.380: INFO: stderr: "No resources found in kubectl-8623 namespace.\n" May 5 21:11:50.380: INFO: stdout: "" May 5 21:11:50.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8623 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 5 21:11:50.476: INFO: stderr: "" May 5 21:11:50.476: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:11:50.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8623" for this suite. 
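------------------------------
The scale steps above are plain `kubectl scale` calls against a replication controller, with pod-count polling in between. Reproduced by hand (same RC and label as in the log, in a hypothetical namespace `demo`):

    kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=demo
    kubectl get pods -l name=update-demo --namespace=demo    # poll until one pod remains
    kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=demo
    kubectl get pods -l name=update-demo --namespace=demo    # poll until the second pod is Running

`kubectl scale` returns as soon as the RC's `.spec.replicas` is updated; as the log shows, the surplus pod can linger through several poll cycles before termination completes, so the harness re-lists pods until the count converges.
------------------------------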
• [SLOW TEST:30.742 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":26,"skipped":351,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:11:50.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0505 21:12:21.376483 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 5 21:12:21.376: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:12:21.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9464" for this suite. 
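------------------------------
The garbage-collector test deletes a deployment with `deleteOptions.propagationPolicy=Orphan` and then waits 30 s to confirm the ReplicaSet is left behind. A sketch of the kubectl equivalent on a cluster of this vintage, with a hypothetical deployment `sample` in namespace `demo`:

    kubectl delete deployment sample --cascade=false --namespace=demo   # pre-1.20 kubectl spelling; newer releases use --cascade=orphan
    kubectl get rs --namespace=demo                                     # the deployment's ReplicaSet should still be listed
    kubectl get rs sample-<hash> -o jsonpath='{.metadata.ownerReferences}' --namespace=demo   # empty once orphaned; <hash> is the surviving ReplicaSet's suffix

Orphaning removes the owner reference from the dependent rather than deleting it, which is exactly what the 30-second observation window above is checking.
------------------------------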
• [SLOW TEST:30.901 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":27,"skipped":353,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:12:21.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 21:12:21.909: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 21:12:23.966: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309941, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309941, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309942, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724309941, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:12:27.026: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:12:27.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3614" for this suite. STEP: Destroying namespace "webhook-3614-markers" for this suite. 
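------------------------------
For reference, the step logged above as "Registering the mutating configmap webhook via the AdmissionRegistration API" comes down to creating a `MutatingWebhookConfiguration` that points at the freshly deployed service. A sketch in which the configuration name, namespace, handler path, and CA bundle are all assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: demo-configmap-mutator            # hypothetical name
    webhooks:
    - name: mutate-configmap.example.com
      clientConfig:
        service:
          name: e2e-test-webhook              # the service deployed in the steps above
          namespace: demo                     # hypothetical namespace
          path: /mutating-configmaps          # assumed handler path on the webhook server
        caBundle: Cg==                        # placeholder; must be the base64 CA that signed the server cert
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
      sideEffects: None
      admissionReviewVersions: ["v1", "v1beta1"]
    EOF

Once this object exists, every in-scope configmap CREATE is sent to the service for patching before admission, which is what the "create a configmap that should be updated by the webhook" step verifies.
------------------------------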
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.385 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":28,"skipped":361,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:12:27.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 5 21:12:32.642: INFO: Successfully updated pod "annotationupdate41d0662e-7c29-4cc6-9113-141d55463dfd" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:12:36.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7278" for this suite. 
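------------------------------
The projected downwardAPI test mounts the pod's own annotations as a file and checks that the file content changes after the `Successfully updated pod` step above. A minimal pod that demonstrates the same mechanism (all names are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo
      annotations:
        build: "one"
    spec:
      containers:
      - name: client
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: annotations
                fieldRef:
                  fieldPath: metadata.annotations
    EOF
    kubectl annotate pod annotationupdate-demo build=two --overwrite
    kubectl logs annotationupdate-demo --tail=5    # the mounted file reflects the new value after the kubelet's next sync
------------------------------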
• [SLOW TEST:8.925 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":363,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:12:36.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:12:40.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1851" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":365,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:12:40.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:12:40.932: INFO: Creating ReplicaSet my-hostname-basic-b172d6d8-dbbe-42d8-a7ff-4fdc190e0925 May 5 21:12:40.944: INFO: Pod name my-hostname-basic-b172d6d8-dbbe-42d8-a7ff-4fdc190e0925: Found 0 pods out of 1 May 5 21:12:45.947: INFO: Pod name my-hostname-basic-b172d6d8-dbbe-42d8-a7ff-4fdc190e0925: Found 1 pods out of 1 May 5 21:12:45.947: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-b172d6d8-dbbe-42d8-a7ff-4fdc190e0925" is running May 5 21:12:45.972: INFO: Pod "my-hostname-basic-b172d6d8-dbbe-42d8-a7ff-4fdc190e0925-4hf82" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 21:12:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 
21:12:43 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 21:12:43 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 21:12:40 +0000 UTC Reason: Message:}]) May 5 21:12:45.972: INFO: Trying to dial the pod May 5 21:12:50.984: INFO: Controller my-hostname-basic-b172d6d8-dbbe-42d8-a7ff-4fdc190e0925: Got expected result from replica 1 [my-hostname-basic-b172d6d8-dbbe-42d8-a7ff-4fdc190e0925-4hf82]: "my-hostname-basic-b172d6d8-dbbe-42d8-a7ff-4fdc190e0925-4hf82", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:12:50.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9337" for this suite. • [SLOW TEST:10.134 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":31,"skipped":366,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:12:50.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 5 21:12:51.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4370' May 5 21:12:51.382: INFO: stderr: "" May 5 21:12:51.382: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 5 21:12:52.387: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:12:52.387: INFO: Found 0 / 1 May 5 21:12:53.388: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:12:53.388: INFO: Found 0 / 1 May 5 21:12:54.387: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:12:54.387: INFO: Found 0 / 1 May 5 21:12:55.386: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:12:55.386: INFO: Found 1 / 1 May 5 21:12:55.387: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 5 21:12:55.390: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:12:55.390: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 5 21:12:55.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-x4rmp --namespace=kubectl-4370 -p {"metadata":{"annotations":{"x":"y"}}}' May 5 21:12:55.508: INFO: stderr: "" May 5 21:12:55.508: INFO: stdout: "pod/agnhost-master-x4rmp patched\n" STEP: checking annotations May 5 21:12:55.534: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:12:55.534: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:12:55.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4370" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":32,"skipped":368,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:12:55.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 5 21:12:55.683: INFO: Waiting up to 5m0s for pod "downward-api-f06a3e12-71ae-416a-8d12-ec40a11f6884" in namespace "downward-api-9266" to be "success or failure" May 5 21:12:55.686: INFO: Pod "downward-api-f06a3e12-71ae-416a-8d12-ec40a11f6884": Phase="Pending", Reason="", readiness=false. Elapsed: 3.762106ms May 5 21:12:57.691: INFO: Pod "downward-api-f06a3e12-71ae-416a-8d12-ec40a11f6884": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007986498s May 5 21:12:59.695: INFO: Pod "downward-api-f06a3e12-71ae-416a-8d12-ec40a11f6884": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012758591s STEP: Saw pod success May 5 21:12:59.695: INFO: Pod "downward-api-f06a3e12-71ae-416a-8d12-ec40a11f6884" satisfied condition "success or failure" May 5 21:12:59.699: INFO: Trying to get logs from node jerma-worker pod downward-api-f06a3e12-71ae-416a-8d12-ec40a11f6884 container dapi-container: STEP: delete the pod May 5 21:12:59.718: INFO: Waiting for pod downward-api-f06a3e12-71ae-416a-8d12-ec40a11f6884 to disappear May 5 21:12:59.757: INFO: Pod downward-api-f06a3e12-71ae-416a-8d12-ec40a11f6884 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:12:59.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9266" for this suite. 
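------------------------------
The downward-API test above injects the node's address into the container environment via a `fieldRef`. The corresponding pod spec is short enough to sketch in full (pod name and namespace are assumptions):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep HOST_IP"]
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
    EOF
    kubectl logs downward-api-demo    # prints HOST_IP=<node address> once the pod has Succeeded
------------------------------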
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":371,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:12:59.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 21:12:59.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1851' May 5 21:12:59.929: INFO: stderr: "" May 5 21:12:59.929: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 5 21:13:04.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1851 -o json' May 5 21:13:05.076: INFO: stderr: "" May 5 21:13:05.076: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-05T21:12:59Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1851\",\n \"resourceVersion\": \"13673048\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1851/pods/e2e-test-httpd-pod\",\n \"uid\": \"c420fcff-a4ef-4309-85d3-347a617a8b0a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-n8wsk\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-n8wsk\",\n \"secret\": {\n \"defaultMode\": 
420,\n \"secretName\": \"default-token-n8wsk\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-05T21:12:59Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-05T21:13:02Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-05T21:13:02Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-05T21:12:59Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://e0dd9082d387ff07d942b52079c0934a5e0433eab022e934471fd9910916dbe8\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-05T21:13:02Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.81\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.81\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-05T21:12:59Z\"\n }\n}\n" STEP: replace the image in the pod May 5 21:13:05.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1851' May 5 21:13:05.317: INFO: stderr: "" May 5 21:13:05.317: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 5 21:13:05.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1851' May 5 21:13:19.516: INFO: stderr: "" May 5 21:13:19.516: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:13:19.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1851" for this suite. 
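------------------------------
The replace step above pipes a modified copy of the live pod object back through `kubectl replace -f -`. A compact way to do the same image swap by hand (same images as in the log; the namespace is hypothetical):

    kubectl get pod e2e-test-httpd-pod -o json --namespace=demo \
      | sed 's|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|' \
      | kubectl replace -f - --namespace=demo

This works because a container's `image` is one of the few pod-spec fields that may be mutated on a running pod; changing most other fields in the piped JSON would make the replace request fail validation.
------------------------------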
• [SLOW TEST:19.762 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":34,"skipped":396,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:13:19.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-7fbecaca-8abf-4442-bfc3-09740d77f831 STEP: Creating a pod to test consume secrets May 5 21:13:19.608: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8a46c9ba-dd15-4ab2-8e60-a2b35c96ac59" in namespace "projected-5108" to be "success or failure" May 5 21:13:19.612: INFO: Pod "pod-projected-secrets-8a46c9ba-dd15-4ab2-8e60-a2b35c96ac59": Phase="Pending", Reason="", readiness=false. Elapsed: 3.861043ms May 5 21:13:21.616: INFO: Pod "pod-projected-secrets-8a46c9ba-dd15-4ab2-8e60-a2b35c96ac59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008078918s May 5 21:13:23.619: INFO: Pod "pod-projected-secrets-8a46c9ba-dd15-4ab2-8e60-a2b35c96ac59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011100909s STEP: Saw pod success May 5 21:13:23.619: INFO: Pod "pod-projected-secrets-8a46c9ba-dd15-4ab2-8e60-a2b35c96ac59" satisfied condition "success or failure" May 5 21:13:23.622: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8a46c9ba-dd15-4ab2-8e60-a2b35c96ac59 container projected-secret-volume-test: STEP: delete the pod May 5 21:13:23.647: INFO: Waiting for pod pod-projected-secrets-8a46c9ba-dd15-4ab2-8e60-a2b35c96ac59 to disappear May 5 21:13:23.651: INFO: Pod pod-projected-secrets-8a46c9ba-dd15-4ab2-8e60-a2b35c96ac59 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:13:23.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5108" for this suite. 
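------------------------------
The projected-secret test mounts a secret through a `projected` volume with an explicit `defaultMode` and asserts the resulting file permissions and content from inside the pod. A sketch with illustrative names:

    kubectl create secret generic projected-secret-demo --from-literal=data-1=value-1 --namespace=demo
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-demo
      namespace: demo
    spec:
      restartPolicy: Never
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -lL /etc/projected-secret && cat /etc/projected-secret/data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected-secret
          readOnly: true
      volumes:
      - name: secret-volume
        projected:
          defaultMode: 0400        # mode applied to the projected files
          sources:
          - secret:
              name: projected-secret-demo
    EOF

The `-L` flag makes busybox `ls` follow the kubelet's `..data` symlinks so the applied mode is visible on the target files.
------------------------------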
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":401,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:13:23.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 5 21:13:23.790: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:23.795: INFO: Number of nodes with available pods: 0 May 5 21:13:23.795: INFO: Node jerma-worker is running more than one daemon pod May 5 21:13:24.799: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:24.803: INFO: Number of nodes with available pods: 0 May 5 21:13:24.803: INFO: Node jerma-worker is running more than one daemon pod May 5 21:13:25.944: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:25.947: INFO: Number of nodes with available pods: 0 May 5 21:13:25.947: INFO: Node jerma-worker is running more than one daemon pod May 5 21:13:26.950: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:26.954: INFO: Number of nodes with available pods: 0 May 5 21:13:26.954: INFO: Node jerma-worker is running more than one daemon pod May 5 21:13:27.800: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:27.804: INFO: Number of nodes with available pods: 0 May 5 21:13:27.804: INFO: Node jerma-worker is running more than one daemon pod May 5 21:13:28.836: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:28.850: INFO: Number of nodes with available pods: 2 May 5 21:13:28.850: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 5 21:13:28.924: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:28.927: INFO: Number of nodes with available pods: 1 May 5 21:13:28.927: INFO: Node jerma-worker2 is running more than one daemon pod May 5 21:13:29.933: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:29.936: INFO: Number of nodes with available pods: 1 May 5 21:13:29.937: INFO: Node jerma-worker2 is running more than one daemon pod May 5 21:13:30.944: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:30.948: INFO: Number of nodes with available pods: 1 May 5 21:13:30.948: INFO: Node jerma-worker2 is running more than one daemon pod May 5 21:13:32.005: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:32.008: INFO: Number of nodes with available pods: 1 May 5 21:13:32.008: INFO: Node jerma-worker2 is running more than one daemon pod May 5 21:13:32.932: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:32.936: INFO: Number of nodes with available pods: 1 May 5 21:13:32.936: INFO: Node jerma-worker2 is running more than one daemon pod May 5 21:13:33.932: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:33.935: INFO: Number of nodes with available pods: 1 May 5 21:13:33.935: INFO: Node jerma-worker2 is running more than one daemon pod May 5 21:13:34.931: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:13:34.933: INFO: Number of nodes with available pods: 2 May 5 21:13:34.933: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6782, will wait for the garbage collector to delete the pods May 5 21:13:34.995: INFO: Deleting DaemonSet.extensions daemon-set took: 7.027339ms May 5 21:13:35.295: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.324451ms May 5 21:13:49.299: INFO: Number of nodes with available pods: 0 May 5 21:13:49.299: INFO: Number of running nodes: 0, number of available pods: 0 May 5 21:13:49.306: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6782/daemonsets","resourceVersion":"13673307"},"items":null} May 5 21:13:49.309: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6782/pods","resourceVersion":"13673307"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:13:49.318: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6782" for this suite. • [SLOW TEST:25.641 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":36,"skipped":406,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:13:49.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-4w85 STEP: Creating a pod to test atomic-volume-subpath May 5 21:13:49.423: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4w85" in namespace "subpath-2448" to be "success or failure" May 5 21:13:49.438: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Pending", Reason="", readiness=false. Elapsed: 15.401281ms May 5 21:13:51.442: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019261148s May 5 21:13:53.447: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Running", Reason="", readiness=true. Elapsed: 4.023906723s May 5 21:13:55.451: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Running", Reason="", readiness=true. Elapsed: 6.028034815s May 5 21:13:57.455: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Running", Reason="", readiness=true. Elapsed: 8.031933385s May 5 21:13:59.458: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Running", Reason="", readiness=true. Elapsed: 10.035359855s May 5 21:14:01.463: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Running", Reason="", readiness=true. Elapsed: 12.039981445s May 5 21:14:03.467: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Running", Reason="", readiness=true. Elapsed: 14.044157795s May 5 21:14:05.471: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Running", Reason="", readiness=true. Elapsed: 16.048366574s May 5 21:14:07.475: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Running", Reason="", readiness=true. Elapsed: 18.051954188s May 5 21:14:09.479: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Running", Reason="", readiness=true. Elapsed: 20.056241003s May 5 21:14:11.484: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Running", Reason="", readiness=true. Elapsed: 22.060830602s May 5 21:14:13.488: INFO: Pod "pod-subpath-test-configmap-4w85": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.064663472s STEP: Saw pod success May 5 21:14:13.488: INFO: Pod "pod-subpath-test-configmap-4w85" satisfied condition "success or failure" May 5 21:14:13.490: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-4w85 container test-container-subpath-configmap-4w85: STEP: delete the pod May 5 21:14:13.544: INFO: Waiting for pod pod-subpath-test-configmap-4w85 to disappear May 5 21:14:13.554: INFO: Pod pod-subpath-test-configmap-4w85 no longer exists STEP: Deleting pod pod-subpath-test-configmap-4w85 May 5 21:14:13.554: INFO: Deleting pod "pod-subpath-test-configmap-4w85" in namespace "subpath-2448" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:14:13.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2448" for this suite. • [SLOW TEST:24.238 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":37,"skipped":410,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:14:13.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 21:14:13.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8674' May 5 21:14:13.727: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 5 21:14:13.727: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 5 21:14:15.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8674' May 5 21:14:15.899: INFO: stderr: "" May 5 21:14:15.899: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:14:15.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8674" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":38,"skipped":432,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:14:15.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-828aee74-c308-40b7-94ac-4cbc4d64438a in namespace container-probe-2153 May 5 21:14:20.086: INFO: Started pod liveness-828aee74-c308-40b7-94ac-4cbc4d64438a in namespace container-probe-2153 STEP: checking the pod's current state and verifying that restartCount is present May 5 21:14:20.090: INFO: Initial restart count of pod liveness-828aee74-c308-40b7-94ac-4cbc4d64438a is 0 May 5 21:14:40.162: INFO: Restart count of pod container-probe-2153/liveness-828aee74-c308-40b7-94ac-4cbc4d64438a is now 1 (20.072410769s elapsed) May 5 21:15:00.206: INFO: Restart count of pod container-probe-2153/liveness-828aee74-c308-40b7-94ac-4cbc4d64438a is now 2 (40.115949025s elapsed) May 5 21:15:20.248: INFO: Restart count of pod container-probe-2153/liveness-828aee74-c308-40b7-94ac-4cbc4d64438a is now 3 (1m0.157901569s elapsed) May 5 21:15:40.339: INFO: Restart count of pod container-probe-2153/liveness-828aee74-c308-40b7-94ac-4cbc4d64438a is now 4 (1m20.249686942s elapsed) May 5 21:16:39.783: INFO: Restart count of pod container-probe-2153/liveness-828aee74-c308-40b7-94ac-4cbc4d64438a is now 5 (2m19.693620382s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:16:39.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2153" for this suite. 
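------------------------------
The monotonically increasing restart count comes from a liveness probe engineered to fail repeatedly. A self-contained pod that reproduces the pattern (probe timings here are illustrative, not the test's exact values):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo
    spec:
      containers:
      - name: liveness
        image: busybox
        args: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/healthy"]
          initialDelaySeconds: 5
          periodSeconds: 5
          failureThreshold: 1
    EOF
    kubectl get pod liveness-demo -w    # RESTARTS climbs 1, 2, 3, ...

Note the widening gaps in the log above: roughly 20 s between the first four restarts, then about a minute before restart 5. That is the kubelet's exponential crash-loop backoff, not probe jitter.
------------------------------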
• [SLOW TEST:143.942 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":446,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:16:39.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 5 21:16:40.210: INFO: Pod name pod-release: Found 0 pods out of 1 May 5 21:16:45.246: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:16:46.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3387" for this suite. 
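------------------------------
"Released" here means the ReplicationController removes its controller ownerReference from a pod whose labels no longer match the selector, then creates a replacement to restore the replica count. With a hypothetical RC selecting `name=pod-release` in namespace `demo`:

    kubectl get pods -l name=pod-release --namespace=demo -o name             # the managed pod
    kubectl label pod <pod-name> name=released --overwrite --namespace=demo   # <pod-name> from the previous command
    kubectl get pods --namespace=demo                                         # relabeled pod still running, plus a fresh replacement
    kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}' --namespace=demo   # empty once released
------------------------------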
• [SLOW TEST:6.414 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":40,"skipped":464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:16:46.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:16:46.342: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 5 21:16:49.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1331 create -f -' May 5 21:16:53.753: INFO: stderr: "" May 5 21:16:53.753: INFO: stdout: "e2e-test-crd-publish-openapi-3637-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 5 21:16:53.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1331 delete e2e-test-crd-publish-openapi-3637-crds test-cr' May 5 21:16:53.881: INFO: stderr: "" May 5 21:16:53.881: INFO: stdout: "e2e-test-crd-publish-openapi-3637-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 5 21:16:53.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1331 apply -f -' May 5 21:16:54.157: INFO: stderr: "" May 5 21:16:54.157: INFO: stdout: "e2e-test-crd-publish-openapi-3637-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 5 21:16:54.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1331 delete e2e-test-crd-publish-openapi-3637-crds test-cr' May 5 21:16:54.281: INFO: stderr: "" May 5 21:16:54.281: INFO: stdout: "e2e-test-crd-publish-openapi-3637-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 5 21:16:54.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3637-crds' May 5 21:16:54.491: INFO: stderr: "" May 5 21:16:54.491: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3637-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:16:56.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-1331" for this suite. • [SLOW TEST:10.146 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":41,"skipped":511,"failed":0} SSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:16:56.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 5 21:16:56.468: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. May 5 21:16:57.102: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 5 21:16:59.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310217, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310217, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310217, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310217, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:17:01.738: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310217, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310217, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310217, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310217, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:17:04.255: INFO: Waited 523.31531ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:17:04.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9401" for this suite. • [SLOW TEST:8.695 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":42,"skipped":514,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:17:05.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:17:05.328: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 5 21:17:05.489: INFO: Pod name sample-pod: Found 0 pods out of 1 May 5 21:17:10.508: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 5 21:17:10.508: INFO: Creating deployment "test-rolling-update-deployment" May 5 21:17:10.513: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 5 21:17:10.525: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 5 21:17:12.534: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 5 21:17:12.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310230, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310230, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:17:14.541: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 5 21:17:14.570: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2983 /apis/apps/v1/namespaces/deployment-2983/deployments/test-rolling-update-deployment 02e3a0cf-c2e7-48c6-ab89-a35a0875c4f8 13674222 1 2020-05-05 21:17:10 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025b8e98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-05 21:17:10 +0000 UTC,LastTransitionTime:2020-05-05 21:17:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-05 21:17:13 +0000 UTC,LastTransitionTime:2020-05-05 21:17:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 5 21:17:14.572: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-2983 /apis/apps/v1/namespaces/deployment-2983/replicasets/test-rolling-update-deployment-67cf4f6444 8f576cd5-4e60-46bb-ab11-8e62704fd4e8 13674211 1 2020-05-05 21:17:10 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 02e3a0cf-c2e7-48c6-ab89-a35a0875c4f8 0xc0025b9777 
0xc0025b9778}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025b98a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 5 21:17:14.573: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 5 21:17:14.573: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2983 /apis/apps/v1/namespaces/deployment-2983/replicasets/test-rolling-update-controller 708e6041-6e20-4332-8980-c64652e8e0c6 13674220 2 2020-05-05 21:17:05 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 02e3a0cf-c2e7-48c6-ab89-a35a0875c4f8 0xc0025b95e7 0xc0025b95e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0025b9668 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 21:17:14.576: INFO: Pod "test-rolling-update-deployment-67cf4f6444-bt5wd" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-bt5wd test-rolling-update-deployment-67cf4f6444- deployment-2983 /api/v1/namespaces/deployment-2983/pods/test-rolling-update-deployment-67cf4f6444-bt5wd 3938131f-f9fe-43ef-8f06-1789c0326ccf 13674210 0 2020-05-05 21:17:10 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 8f576cd5-4e60-46bb-ab11-8e62704fd4e8 0xc002644507 0xc002644508}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qfjlz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qfjlz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qfjlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 21:17:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 21:17:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 21:17:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 21:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.146,StartTime:2020-05-05 21:17:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 21:17:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://2aa13c072b1a669c24cd469d0989f09f18554974ba8652d8fb94d0da45f9b05b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.146,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:17:14.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2983" for this suite. • [SLOW TEST:9.468 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":43,"skipped":525,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:17:14.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:17:14.641: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-3b9aabc2-f8cc-4032-8436-5d7156c607a2" in namespace "security-context-test-7584" to be "success or failure" May 5 21:17:14.646: INFO: Pod "busybox-readonly-false-3b9aabc2-f8cc-4032-8436-5d7156c607a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.394763ms May 5 21:17:16.671: INFO: Pod "busybox-readonly-false-3b9aabc2-f8cc-4032-8436-5d7156c607a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030071258s May 5 21:17:18.824: INFO: Pod "busybox-readonly-false-3b9aabc2-f8cc-4032-8436-5d7156c607a2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.182984173s May 5 21:17:18.824: INFO: Pod "busybox-readonly-false-3b9aabc2-f8cc-4032-8436-5d7156c607a2" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:17:18.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7584" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":554,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:17:18.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:17:19.051: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe3b15e1-3449-4c05-8f67-f05d462676df" in namespace "downward-api-2351" to be "success or failure" May 5 21:17:19.063: INFO: Pod "downwardapi-volume-fe3b15e1-3449-4c05-8f67-f05d462676df": Phase="Pending", Reason="", readiness=false. Elapsed: 12.112295ms May 5 21:17:21.067: INFO: Pod "downwardapi-volume-fe3b15e1-3449-4c05-8f67-f05d462676df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016351124s May 5 21:17:23.071: INFO: Pod "downwardapi-volume-fe3b15e1-3449-4c05-8f67-f05d462676df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020327537s STEP: Saw pod success May 5 21:17:23.071: INFO: Pod "downwardapi-volume-fe3b15e1-3449-4c05-8f67-f05d462676df" satisfied condition "success or failure" May 5 21:17:23.075: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-fe3b15e1-3449-4c05-8f67-f05d462676df container client-container: STEP: delete the pod May 5 21:17:23.114: INFO: Waiting for pod downwardapi-volume-fe3b15e1-3449-4c05-8f67-f05d462676df to disappear May 5 21:17:23.118: INFO: Pod downwardapi-volume-fe3b15e1-3449-4c05-8f67-f05d462676df no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:17:23.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2351" for this suite. 
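To make the step above concrete (commentary, not test output): when a container declares no memory limit, a downwardAPI volume file backed by limits.memory falls back to the node's allocatable memory, which is the value this test asserts on. A hedged sketch of that volume wiring, assuming the same k8s.io/api packages; the names and mount path are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// No memory limit is set on the container below,
							// so the kubelet writes node allocatable memory
							// into this file instead.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}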
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":555,"failed":0} SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:17:23.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-9541 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9541 to expose endpoints map[] May 5 21:17:23.243: INFO: Get endpoints failed (3.139931ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 5 21:17:24.248: INFO: successfully validated that service endpoint-test2 in namespace services-9541 exposes endpoints map[] (1.008001962s elapsed) STEP: Creating pod pod1 in namespace services-9541 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9541 to expose endpoints map[pod1:[80]] May 5 21:17:28.322: INFO: successfully validated that service endpoint-test2 in namespace services-9541 exposes endpoints map[pod1:[80]] (4.067447294s elapsed) STEP: Creating pod pod2 in namespace services-9541 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9541 to expose endpoints map[pod1:[80] pod2:[80]] May 5 21:17:31.539: INFO: successfully validated that service endpoint-test2 in namespace services-9541 exposes endpoints map[pod1:[80] pod2:[80]] (3.212726708s elapsed) STEP: Deleting pod pod1 in namespace services-9541 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9541 to expose endpoints map[pod2:[80]] May 5 21:17:32.580: INFO: successfully validated that service endpoint-test2 in namespace services-9541 exposes endpoints map[pod2:[80]] (1.03710558s elapsed) STEP: Deleting pod pod2 in namespace services-9541 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9541 to expose endpoints map[] May 5 21:17:33.756: INFO: successfully validated that service endpoint-test2 in namespace services-9541 exposes endpoints map[] (1.170539374s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:17:33.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9541" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.827 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":46,"skipped":559,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:17:33.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 5 21:17:34.067: INFO: Waiting up to 5m0s for pod "pod-6affad5d-bef1-4d59-9476-f3b87fcbdf8c" in namespace "emptydir-1182" to be "success or failure" May 5 21:17:34.085: INFO: Pod "pod-6affad5d-bef1-4d59-9476-f3b87fcbdf8c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.137421ms May 5 21:17:36.089: INFO: Pod "pod-6affad5d-bef1-4d59-9476-f3b87fcbdf8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022409844s May 5 21:17:38.093: INFO: Pod "pod-6affad5d-bef1-4d59-9476-f3b87fcbdf8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026137041s STEP: Saw pod success May 5 21:17:38.093: INFO: Pod "pod-6affad5d-bef1-4d59-9476-f3b87fcbdf8c" satisfied condition "success or failure" May 5 21:17:38.095: INFO: Trying to get logs from node jerma-worker pod pod-6affad5d-bef1-4d59-9476-f3b87fcbdf8c container test-container: STEP: delete the pod May 5 21:17:38.115: INFO: Waiting for pod pod-6affad5d-bef1-4d59-9476-f3b87fcbdf8c to disappear May 5 21:17:38.119: INFO: Pod pod-6affad5d-bef1-4d59-9476-f3b87fcbdf8c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:17:38.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1182" for this suite. 
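For context on the (root,0666,tmpfs) naming above (commentary, not Ginkgo output): the test writes a file as root with mode 0666 into a memory-backed emptyDir and reads it back from the pod logs. A minimal sketch of that volume setup, assuming the same k8s.io/api packages; the mount path and shell command are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs rather than
					// node disk, the "tmpfs" half of the test name.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Write a file as root, force mode 0666, then print mode and
				// content so the test can verify both from the logs.
				Command: []string{"sh", "-c",
					"echo hi > /mnt/file && chmod 0666 /mnt/file && ls -l /mnt/file && cat /mnt/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}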
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":567,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:17:38.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3540 STEP: creating a selector STEP: Creating the service pods in kubernetes May 5 21:17:38.302: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 5 21:18:04.449: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.150 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3540 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 21:18:04.449: INFO: >>> kubeConfig: /root/.kube/config I0505 21:18:04.477533 7 log.go:172] (0xc0043c8b00) (0xc0011f4c80) Create stream I0505 21:18:04.477565 7 log.go:172] (0xc0043c8b00) (0xc0011f4c80) Stream added, broadcasting: 1 I0505 21:18:04.479839 7 log.go:172] (0xc0043c8b00) Reply frame received for 1 I0505 21:18:04.479878 7 log.go:172] (0xc0043c8b00) (0xc0011f4f00) Create stream I0505 21:18:04.479892 7 log.go:172] (0xc0043c8b00) (0xc0011f4f00) Stream added, broadcasting: 3 I0505 21:18:04.480906 7 log.go:172] (0xc0043c8b00) Reply frame received for 3 I0505 21:18:04.480956 7 log.go:172] (0xc0043c8b00) (0xc001a0d400) Create stream I0505 21:18:04.480975 7 log.go:172] (0xc0043c8b00) (0xc001a0d400) Stream added, broadcasting: 5 I0505 21:18:04.482168 7 log.go:172] (0xc0043c8b00) Reply frame received for 5 I0505 21:18:05.590106 7 log.go:172] (0xc0043c8b00) Data frame received for 3 I0505 21:18:05.590137 7 log.go:172] (0xc0011f4f00) (3) Data frame handling I0505 21:18:05.590153 7 log.go:172] (0xc0011f4f00) (3) Data frame sent I0505 21:18:05.591995 7 log.go:172] (0xc0043c8b00) Data frame received for 3 I0505 21:18:05.592046 7 log.go:172] (0xc0011f4f00) (3) Data frame handling I0505 21:18:05.592080 7 log.go:172] (0xc0043c8b00) Data frame received for 5 I0505 21:18:05.592100 7 log.go:172] (0xc001a0d400) (5) Data frame handling I0505 21:18:05.593606 7 log.go:172] (0xc0043c8b00) Data frame received for 1 I0505 21:18:05.593686 7 log.go:172] (0xc0011f4c80) (1) Data frame handling I0505 21:18:05.593720 7 log.go:172] (0xc0011f4c80) (1) Data frame sent I0505 21:18:05.593739 7 log.go:172] (0xc0043c8b00) (0xc0011f4c80) Stream removed, broadcasting: 1 I0505 21:18:05.593753 7 log.go:172] (0xc0043c8b00) Go away received I0505 21:18:05.594208 7 log.go:172] (0xc0043c8b00) (0xc0011f4c80) Stream removed, broadcasting: 1 I0505 21:18:05.594241 7 log.go:172] (0xc0043c8b00) (0xc0011f4f00) Stream removed, broadcasting: 3 I0505 21:18:05.594258 7 log.go:172] (0xc0043c8b00) 
(0xc001a0d400) Stream removed, broadcasting: 5 May 5 21:18:05.594: INFO: Found all expected endpoints: [netserver-0] May 5 21:18:05.598: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.89 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3540 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 21:18:05.599: INFO: >>> kubeConfig: /root/.kube/config I0505 21:18:05.626050 7 log.go:172] (0xc002081a20) (0xc0014adae0) Create stream I0505 21:18:05.626083 7 log.go:172] (0xc002081a20) (0xc0014adae0) Stream added, broadcasting: 1 I0505 21:18:05.629998 7 log.go:172] (0xc002081a20) Reply frame received for 1 I0505 21:18:05.630050 7 log.go:172] (0xc002081a20) (0xc0013dda40) Create stream I0505 21:18:05.630062 7 log.go:172] (0xc002081a20) (0xc0013dda40) Stream added, broadcasting: 3 I0505 21:18:05.635369 7 log.go:172] (0xc002081a20) Reply frame received for 3 I0505 21:18:05.635398 7 log.go:172] (0xc002081a20) (0xc001a0d4a0) Create stream I0505 21:18:05.635406 7 log.go:172] (0xc002081a20) (0xc001a0d4a0) Stream added, broadcasting: 5 I0505 21:18:05.636292 7 log.go:172] (0xc002081a20) Reply frame received for 5 I0505 21:18:06.693102 7 log.go:172] (0xc002081a20) Data frame received for 3 I0505 21:18:06.693397 7 log.go:172] (0xc0013dda40) (3) Data frame handling I0505 21:18:06.693444 7 log.go:172] (0xc0013dda40) (3) Data frame sent I0505 21:18:06.693465 7 log.go:172] (0xc002081a20) Data frame received for 3 I0505 21:18:06.693495 7 log.go:172] (0xc0013dda40) (3) Data frame handling I0505 21:18:06.693658 7 log.go:172] (0xc002081a20) Data frame received for 5 I0505 21:18:06.693691 7 log.go:172] (0xc001a0d4a0) (5) Data frame handling I0505 21:18:06.695802 7 log.go:172] (0xc002081a20) Data frame received for 1 I0505 21:18:06.695827 7 log.go:172] (0xc0014adae0) (1) Data frame handling I0505 21:18:06.695862 7 log.go:172] (0xc0014adae0) (1) Data frame sent I0505 21:18:06.695896 7 log.go:172] (0xc002081a20) (0xc0014adae0) Stream removed, broadcasting: 1 I0505 21:18:06.695921 7 log.go:172] (0xc002081a20) Go away received I0505 21:18:06.696054 7 log.go:172] (0xc002081a20) (0xc0014adae0) Stream removed, broadcasting: 1 I0505 21:18:06.696104 7 log.go:172] (0xc002081a20) (0xc0013dda40) Stream removed, broadcasting: 3 I0505 21:18:06.696140 7 log.go:172] (0xc002081a20) (0xc001a0d4a0) Stream removed, broadcasting: 5 May 5 21:18:06.696: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:18:06.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3540" for this suite. 
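A note on the ExecWithOptions lines above: the actual check is just `echo hostName | nc -w 1 -u <pod-ip> 8081`, i.e. send "hostName" over UDP and expect the netserver pod to echo its hostname back. A rough standalone Go equivalent using only the standard library; the pod IP is taken from this run's log and is ephemeral, so treat it as a placeholder:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Equivalent of: echo hostName | nc -w 1 -u 10.244.1.150 8081
	conn, err := net.DialTimeout("udp", "10.244.1.150:8081", time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()
	_ = conn.SetDeadline(time.Now().Add(time.Second))

	if _, err := conn.Write([]byte("hostName")); err != nil {
		fmt.Fprintln(os.Stderr, "write:", err)
		os.Exit(1)
	}
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read:", err)
		os.Exit(1)
	}
	// The netserver replies with its hostname, e.g. "netserver-0".
	fmt.Println(string(buf[:n]))
}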
• [SLOW TEST:28.580 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:18:06.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 5 21:18:06.839: INFO: Created pod &Pod{ObjectMeta:{dns-4748 dns-4748 /api/v1/namespaces/dns-4748/pods/dns-4748 e259f665-189d-48d0-a25e-8d84dde7a72b 13674570 0 2020-05-05 21:18:06 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s6mhk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s6mhk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s6mhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName
:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... May 5 21:18:10.846: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4748 PodName:dns-4748 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 21:18:10.846: INFO: >>> kubeConfig: /root/.kube/config I0505 21:18:10.876436 7 log.go:172] (0xc0043c91e0) (0xc0011f5ea0) Create stream I0505 21:18:10.876470 7 log.go:172] (0xc0043c91e0) (0xc0011f5ea0) Stream added, broadcasting: 1 I0505 21:18:10.880090 7 log.go:172] (0xc0043c91e0) Reply frame received for 1 I0505 21:18:10.880140 7 log.go:172] (0xc0043c91e0) (0xc0014adb80) Create stream I0505 21:18:10.880156 7 log.go:172] (0xc0043c91e0) (0xc0014adb80) Stream added, broadcasting: 3 I0505 21:18:10.881707 7 log.go:172] (0xc0043c91e0) Reply frame received for 3 I0505 21:18:10.881750 7 log.go:172] (0xc0043c91e0) (0xc0014add60) Create stream I0505 21:18:10.881771 7 log.go:172] (0xc0043c91e0) (0xc0014add60) Stream added, broadcasting: 5 I0505 21:18:10.882931 7 log.go:172] (0xc0043c91e0) Reply frame received for 5 I0505 21:18:10.970830 7 log.go:172] (0xc0043c91e0) Data frame received for 3 I0505 21:18:10.970866 7 log.go:172] (0xc0014adb80) (3) Data frame handling I0505 21:18:10.970892 7 log.go:172] (0xc0014adb80) (3) Data frame sent I0505 21:18:10.971586 7 log.go:172] (0xc0043c91e0) Data frame received for 5 I0505 21:18:10.971616 7 log.go:172] (0xc0014add60) (5) Data frame handling I0505 21:18:10.971689 7 log.go:172] (0xc0043c91e0) Data frame received for 3 I0505 21:18:10.971707 7 log.go:172] (0xc0014adb80) (3) Data frame handling I0505 21:18:10.973397 7 log.go:172] (0xc0043c91e0) Data frame received for 1 I0505 21:18:10.973420 7 log.go:172] (0xc0011f5ea0) (1) Data frame handling I0505 21:18:10.973443 7 log.go:172] (0xc0011f5ea0) (1) Data frame sent I0505 21:18:10.973549 7 log.go:172] (0xc0043c91e0) (0xc0011f5ea0) Stream removed, broadcasting: 1 I0505 21:18:10.973637 7 log.go:172] (0xc0043c91e0) (0xc0011f5ea0) Stream removed, broadcasting: 1 I0505 21:18:10.973653 7 log.go:172] (0xc0043c91e0) (0xc0014adb80) Stream removed, broadcasting: 3 I0505 21:18:10.973719 7 log.go:172] (0xc0043c91e0) Go away received I0505 21:18:10.973847 7 log.go:172] (0xc0043c91e0) (0xc0014add60) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 5 21:18:10.973: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4748 PodName:dns-4748 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 21:18:10.973: INFO: >>> kubeConfig: /root/.kube/config I0505 21:18:11.004750 7 log.go:172] (0xc0043c9810) (0xc00104e780) Create stream I0505 21:18:11.004774 7 log.go:172] (0xc0043c9810) (0xc00104e780) Stream added, broadcasting: 1 I0505 21:18:11.007757 7 log.go:172] (0xc0043c9810) Reply frame received for 1 I0505 21:18:11.007808 7 log.go:172] (0xc0043c9810) (0xc000cfa000) Create stream I0505 21:18:11.007825 7 log.go:172] (0xc0043c9810) (0xc000cfa000) Stream added, broadcasting: 3 I0505 21:18:11.008764 7 log.go:172] (0xc0043c9810) Reply frame received for 3 I0505 21:18:11.008814 7 log.go:172] (0xc0043c9810) (0xc000cfa280) Create stream I0505 21:18:11.008830 7 log.go:172] (0xc0043c9810) (0xc000cfa280) Stream added, broadcasting: 5 I0505 21:18:11.009849 7 log.go:172] (0xc0043c9810) Reply frame received for 5 I0505 21:18:11.098443 7 log.go:172] (0xc0043c9810) Data frame received for 3 I0505 21:18:11.098481 7 log.go:172] (0xc000cfa000) (3) Data frame handling I0505 21:18:11.098498 7 log.go:172] (0xc000cfa000) (3) Data frame sent I0505 21:18:11.099580 7 log.go:172] (0xc0043c9810) Data frame received for 5 I0505 21:18:11.099609 7 log.go:172] (0xc000cfa280) (5) Data frame handling I0505 21:18:11.099960 7 log.go:172] (0xc0043c9810) Data frame received for 3 I0505 21:18:11.099982 7 log.go:172] (0xc000cfa000) (3) Data frame handling I0505 21:18:11.102027 7 log.go:172] (0xc0043c9810) Data frame received for 1 I0505 21:18:11.102068 7 log.go:172] (0xc00104e780) (1) Data frame handling I0505 21:18:11.102087 7 log.go:172] (0xc00104e780) (1) Data frame sent I0505 21:18:11.102190 7 log.go:172] (0xc0043c9810) (0xc00104e780) Stream removed, broadcasting: 1 I0505 21:18:11.102322 7 log.go:172] (0xc0043c9810) Go away received I0505 21:18:11.102389 7 log.go:172] (0xc0043c9810) (0xc00104e780) Stream removed, broadcasting: 1 I0505 21:18:11.102414 7 log.go:172] (0xc0043c9810) (0xc000cfa000) Stream removed, broadcasting: 3 I0505 21:18:11.102426 7 log.go:172] (0xc0043c9810) (0xc000cfa280) Stream removed, broadcasting: 5 May 5 21:18:11.102: INFO: Deleting pod dns-4748... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:18:11.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4748" for this suite. 
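For readers decoding the long pod dump above: the essential fields are dnsPolicy None plus an explicit dnsConfig, which replaces the cluster-generated resolv.conf entirely with the listed nameservers and search domains. A trimmed sketch of just that part of the spec, assuming the same k8s.io/api packages and using the values visible in the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-demo"},
		Spec: corev1.PodSpec{
			// DNSNone tells the kubelet to ignore cluster DNS and build
			// the pod's resolv.conf purely from DNSConfig below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}

The test then execs agnhost's dns-suffix and dns-server-list subcommands in the pod (the ExecWithOptions streams above) to confirm both values landed in resolv.conf.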
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":49,"skipped":598,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:18:11.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:18:17.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8519" for this suite. • [SLOW TEST:6.500 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":622,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:18:17.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1158 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1158 STEP: Creating statefulset with conflicting port in namespace statefulset-1158 STEP: Waiting until pod test-pod will start running in namespace statefulset-1158 STEP: Waiting until stateful pod ss-0 
will be recreated and deleted at least once in namespace statefulset-1158 May 5 21:18:23.829: INFO: Observed stateful pod in namespace: statefulset-1158, name: ss-0, uid: 728e2d1c-bcf1-4c53-9619-4daf2387cd0a, status phase: Pending. Waiting for statefulset controller to delete. May 5 21:18:24.348: INFO: Observed stateful pod in namespace: statefulset-1158, name: ss-0, uid: 728e2d1c-bcf1-4c53-9619-4daf2387cd0a, status phase: Failed. Waiting for statefulset controller to delete. May 5 21:18:24.393: INFO: Observed stateful pod in namespace: statefulset-1158, name: ss-0, uid: 728e2d1c-bcf1-4c53-9619-4daf2387cd0a, status phase: Failed. Waiting for statefulset controller to delete. May 5 21:18:24.403: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1158 STEP: Removing pod with conflicting port in namespace statefulset-1158 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1158 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 5 21:18:30.508: INFO: Deleting all statefulset in ns statefulset-1158 May 5 21:18:30.511: INFO: Scaling statefulset ss to 0 May 5 21:18:40.581: INFO: Waiting for statefulset status.replicas updated to 0 May 5 21:18:40.584: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:18:40.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1158" for this suite. • [SLOW TEST:22.962 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":51,"skipped":629,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:18:40.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0505 21:18:50.736333 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 5 21:18:50.736: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:18:50.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7674" for this suite. • [SLOW TEST:10.158 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":52,"skipped":664,"failed":0} SSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:18:50.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:18:50.858: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 5.330133ms)
May 5 21:18:50.862: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.558671ms)
May 5 21:18:50.889: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 27.358119ms)
May 5 21:18:50.894: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.324968ms)
May 5 21:18:50.897: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.710615ms)
May 5 21:18:50.901: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.180639ms)
May 5 21:18:50.904: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.629424ms)
May 5 21:18:50.908: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.74973ms)
May 5 21:18:50.911: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.166113ms)
May 5 21:18:50.915: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.329699ms)
May 5 21:18:50.918: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.20313ms)
May 5 21:18:50.921: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.328927ms)
May 5 21:18:50.925: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.575619ms)
May 5 21:18:50.928: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.472944ms)
May 5 21:18:50.932: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.635018ms)
May 5 21:18:50.936: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.584004ms)
May 5 21:18:50.940: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.177656ms)
May 5 21:18:50.944: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.756959ms)
May 5 21:18:50.948: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.807928ms)
May 5 21:18:50.950: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.838117ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:18:50.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4891" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":53,"skipped":670,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:18:50.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:19:22.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9711" for this suite. STEP: Destroying namespace "nsdeletetest-3435" for this suite. May 5 21:19:22.192: INFO: Namespace nsdeletetest-3435 was already deleted STEP: Destroying namespace "nsdeletetest-3147" for this suite. 
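Note: the Namespaces test above demonstrates cascading namespace deletion: deleting a namespace moves it to Terminating, and the namespace controller removes every object inside it (including running pods) before the Namespace object itself disappears. A minimal shell sketch of the same behavior, assuming kubectl access; the namespace, pod name, and image are illustrative, not the framework's generated ones:

  # create a throwaway namespace and a pod inside it
  kubectl create namespace nsdelete-demo
  kubectl run demo --image=docker.io/library/httpd:2.4.38-alpine --restart=Never --namespace=nsdelete-demo
  kubectl wait --for=condition=Ready pod/demo --namespace=nsdelete-demo --timeout=2m
  # delete the namespace; --wait blocks until it has fully terminated
  kubectl delete namespace nsdelete-demo --wait=true
  # recreating the namespace should show no surviving pods
  kubectl create namespace nsdelete-demo
  kubectl get pods --namespace=nsdelete-demo   # expected: No resources found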
• [SLOW TEST:31.239 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":54,"skipped":681,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:19:22.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-243a2af2-37ee-4ab5-ab1c-78a6a12d1d95 STEP: Creating a pod to test consume configMaps May 5 21:19:22.279: INFO: Waiting up to 5m0s for pod "pod-configmaps-827370ec-76b0-49b9-ad66-f5e6cb6b218d" in namespace "configmap-1213" to be "success or failure" May 5 21:19:22.293: INFO: Pod "pod-configmaps-827370ec-76b0-49b9-ad66-f5e6cb6b218d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.452488ms May 5 21:19:24.298: INFO: Pod "pod-configmaps-827370ec-76b0-49b9-ad66-f5e6cb6b218d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018844977s May 5 21:19:26.301: INFO: Pod "pod-configmaps-827370ec-76b0-49b9-ad66-f5e6cb6b218d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022552636s STEP: Saw pod success May 5 21:19:26.301: INFO: Pod "pod-configmaps-827370ec-76b0-49b9-ad66-f5e6cb6b218d" satisfied condition "success or failure" May 5 21:19:26.304: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-827370ec-76b0-49b9-ad66-f5e6cb6b218d container configmap-volume-test: STEP: delete the pod May 5 21:19:26.353: INFO: Waiting for pod pod-configmaps-827370ec-76b0-49b9-ad66-f5e6cb6b218d to disappear May 5 21:19:26.362: INFO: Pod pod-configmaps-827370ec-76b0-49b9-ad66-f5e6cb6b218d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:19:26.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1213" for this suite. 
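Note: the ConfigMap volume test above exercises the standard pattern of mounting a ConfigMap into a pod and reading its keys as files. A minimal sketch of that pattern; the names and image below are illustrative, not the manifests the e2e framework generates:

  # create a ConfigMap with one key, then mount it and read the key back as a file
  kubectl create configmap demo-config --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-volume-demo   # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/config/data-1"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      configMap:
        name: demo-config
  EOF
  # once the pod has Succeeded, its log should contain "value-1"
  kubectl logs configmap-volume-demo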
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":681,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:19:26.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:19:26.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1000' May 5 21:19:26.793: INFO: stderr: "" May 5 21:19:26.793: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 5 21:19:26.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1000' May 5 21:19:27.057: INFO: stderr: "" May 5 21:19:27.057: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 5 21:19:28.061: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:19:28.061: INFO: Found 0 / 1 May 5 21:19:29.061: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:19:29.062: INFO: Found 0 / 1 May 5 21:19:30.062: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:19:30.062: INFO: Found 1 / 1 May 5 21:19:30.062: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 5 21:19:30.065: INFO: Selector matched 1 pods for map[app:agnhost] May 5 21:19:30.065: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 5 21:19:30.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-74q9n --namespace=kubectl-1000' May 5 21:19:30.184: INFO: stderr: "" May 5 21:19:30.184: INFO: stdout: "Name: agnhost-master-74q9n\nNamespace: kubectl-1000\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Tue, 05 May 2020 21:19:26 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.155\nIPs:\n IP: 10.244.1.155\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://d35fb0d4e6cd0c6f20c58cc326d883d92c322e048661fcd530902e890fe98734\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 05 May 2020 21:19:29 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-ghcqv (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-ghcqv:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-ghcqv\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-1000/agnhost-master-74q9n to jerma-worker\n Normal Pulled 2s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" May 5 21:19:30.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1000' May 5 21:19:30.300: INFO: stderr: "" May 5 21:19:30.300: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1000\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-74q9n\n" May 5 21:19:30.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1000' May 5 21:19:30.414: INFO: stderr: "" May 5 21:19:30.414: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1000\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.105.50.125\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.155:6379\nSession Affinity: None\nEvents: \n" May 5 21:19:30.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 5 21:19:30.548: INFO: stderr: "" May 5 21:19:30.548: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Tue, 05 May 2020 21:19:27 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 05 May 2020 21:14:53 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 05 May 2020 21:14:53 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 05 May 2020 21:14:53 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 05 May 2020 21:14:53 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 51d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 51d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 51d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 51d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 51d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 51d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 5 21:19:30.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1000' May 5 21:19:30.656: INFO: stderr: "" May 5 21:19:30.656: INFO: stdout: "Name: kubectl-1000\nLabels: e2e-framework=kubectl\n e2e-run=588cbb0a-04d2-456c-a1b1-86bdc850e3a5\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange 
resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:19:30.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1000" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":56,"skipped":714,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:19:30.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-ca3a9f4c-7331-4e8b-9384-15da00b55187 STEP: Creating a pod to test consume secrets May 5 21:19:30.842: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1ed6acf-de45-4fc9-bb0a-053a01f159b7" in namespace "projected-9238" to be "success or failure" May 5 21:19:30.846: INFO: Pod "pod-projected-secrets-e1ed6acf-de45-4fc9-bb0a-053a01f159b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.944449ms May 5 21:19:32.931: INFO: Pod "pod-projected-secrets-e1ed6acf-de45-4fc9-bb0a-053a01f159b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08952824s May 5 21:19:34.935: INFO: Pod "pod-projected-secrets-e1ed6acf-de45-4fc9-bb0a-053a01f159b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093795904s STEP: Saw pod success May 5 21:19:34.936: INFO: Pod "pod-projected-secrets-e1ed6acf-de45-4fc9-bb0a-053a01f159b7" satisfied condition "success or failure" May 5 21:19:34.939: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e1ed6acf-de45-4fc9-bb0a-053a01f159b7 container projected-secret-volume-test: STEP: delete the pod May 5 21:19:34.974: INFO: Waiting for pod pod-projected-secrets-e1ed6acf-de45-4fc9-bb0a-053a01f159b7 to disappear May 5 21:19:34.984: INFO: Pod pod-projected-secrets-e1ed6acf-de45-4fc9-bb0a-053a01f159b7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:19:34.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9238" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":746,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:19:34.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:19:35.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2214" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":752,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:19:35.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-2184 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2184 to expose endpoints map[] May 5 21:19:35.686: INFO: Get endpoints failed (35.618098ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 5 21:19:36.689: INFO: successfully validated that service multi-endpoint-test in namespace services-2184 exposes endpoints map[] (1.038905976s elapsed) STEP: Creating pod pod1 in namespace services-2184 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2184 to expose endpoints map[pod1:[100]] May 5 21:19:40.074: INFO: successfully validated that service multi-endpoint-test in namespace services-2184 exposes endpoints map[pod1:[100]] (3.378387335s elapsed) STEP: Creating pod pod2 in namespace services-2184 STEP: waiting up to 3m0s 
for service multi-endpoint-test in namespace services-2184 to expose endpoints map[pod1:[100] pod2:[101]] May 5 21:19:43.167: INFO: successfully validated that service multi-endpoint-test in namespace services-2184 exposes endpoints map[pod1:[100] pod2:[101]] (3.079094623s elapsed) STEP: Deleting pod pod1 in namespace services-2184 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2184 to expose endpoints map[pod2:[101]] May 5 21:19:44.215: INFO: successfully validated that service multi-endpoint-test in namespace services-2184 exposes endpoints map[pod2:[101]] (1.043351892s elapsed) STEP: Deleting pod pod2 in namespace services-2184 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2184 to expose endpoints map[] May 5 21:19:45.271: INFO: successfully validated that service multi-endpoint-test in namespace services-2184 exposes endpoints map[] (1.051110162s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:19:45.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2184" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.951 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":59,"skipped":769,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:19:45.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-5f53a0e0-1aae-4fc7-ac76-411c59dfae8c STEP: Creating a pod to test consume configMaps May 5 21:19:45.406: INFO: Waiting up to 5m0s for pod "pod-configmaps-15387917-cdde-4b19-9b94-47dc1cb9a823" in namespace "configmap-8101" to be "success or failure" May 5 21:19:45.410: INFO: Pod "pod-configmaps-15387917-cdde-4b19-9b94-47dc1cb9a823": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172642ms May 5 21:19:47.417: INFO: Pod "pod-configmaps-15387917-cdde-4b19-9b94-47dc1cb9a823": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010690372s May 5 21:19:49.421: INFO: Pod "pod-configmaps-15387917-cdde-4b19-9b94-47dc1cb9a823": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014744383s STEP: Saw pod success May 5 21:19:49.421: INFO: Pod "pod-configmaps-15387917-cdde-4b19-9b94-47dc1cb9a823" satisfied condition "success or failure" May 5 21:19:49.423: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-15387917-cdde-4b19-9b94-47dc1cb9a823 container configmap-volume-test: STEP: delete the pod May 5 21:19:49.454: INFO: Waiting for pod pod-configmaps-15387917-cdde-4b19-9b94-47dc1cb9a823 to disappear May 5 21:19:49.475: INFO: Pod pod-configmaps-15387917-cdde-4b19-9b94-47dc1cb9a823 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:19:49.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8101" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":793,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:19:49.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 5 21:19:49.561: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:19:54.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7878" for this suite. 
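Note: with restartPolicy: Never, a failing init container is terminal: the kubelet does not retry it, the pod goes to phase Failed, and the app containers never start. A minimal sketch of such a pod (illustrative names, not the test's generated spec):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fail-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init-fails
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "exit 1"]   # always fails
    containers:
    - name: app
      image: docker.io/library/busybox:1.29
      command: ["sh", "-c", "echo this should never run"]
  EOF
  # expected: the phase becomes Failed and the 'app' container is never started
  kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'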
• [SLOW TEST:5.292 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":61,"skipped":807,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:19:54.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-05029882-852c-484d-a5a5-56637d8a88fb STEP: Creating a pod to test consume configMaps May 5 21:19:54.876: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dff84798-e35a-4daa-a0df-af63566b0838" in namespace "projected-8704" to be "success or failure" May 5 21:19:54.896: INFO: Pod "pod-projected-configmaps-dff84798-e35a-4daa-a0df-af63566b0838": Phase="Pending", Reason="", readiness=false. Elapsed: 19.753299ms May 5 21:19:56.900: INFO: Pod "pod-projected-configmaps-dff84798-e35a-4daa-a0df-af63566b0838": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023524393s May 5 21:19:58.904: INFO: Pod "pod-projected-configmaps-dff84798-e35a-4daa-a0df-af63566b0838": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027178573s STEP: Saw pod success May 5 21:19:58.904: INFO: Pod "pod-projected-configmaps-dff84798-e35a-4daa-a0df-af63566b0838" satisfied condition "success or failure" May 5 21:19:58.961: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-dff84798-e35a-4daa-a0df-af63566b0838 container projected-configmap-volume-test: STEP: delete the pod May 5 21:19:59.010: INFO: Waiting for pod pod-projected-configmaps-dff84798-e35a-4daa-a0df-af63566b0838 to disappear May 5 21:19:59.035: INFO: Pod pod-projected-configmaps-dff84798-e35a-4daa-a0df-af63566b0838 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:19:59.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8704" for this suite. 
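Note: this variant consumes the ConfigMap through a projected volume source rather than a plain configMap volume, with the pod running as a non-root UID. A sketch with illustrative names:

  kubectl create configmap projected-demo --from-literal=data-1=value-1
  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-configmap-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000   # non-root
    containers:
    - name: test
      image: docker.io/library/busybox:1.29
      command: ["cat", "/etc/projected/data-1"]
      volumeMounts:
      - name: config
        mountPath: /etc/projected
    volumes:
    - name: config
      projected:
        sources:
        - configMap:
            name: projected-demo
  EOF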
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":809,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:19:59.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:20:15.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2647" for this suite. • [SLOW TEST:16.147 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":63,"skipped":853,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:20:15.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 5 21:20:15.325: INFO: Waiting up to 5m0s for pod "pod-833c3944-6d00-4094-af79-b0a7b4621349" in namespace "emptydir-781" to be "success or failure" May 5 21:20:15.328: INFO: Pod "pod-833c3944-6d00-4094-af79-b0a7b4621349": Phase="Pending", Reason="", readiness=false. Elapsed: 3.216449ms May 5 21:20:17.333: INFO: Pod "pod-833c3944-6d00-4094-af79-b0a7b4621349": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008067124s May 5 21:20:19.337: INFO: Pod "pod-833c3944-6d00-4094-af79-b0a7b4621349": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012475291s STEP: Saw pod success May 5 21:20:19.337: INFO: Pod "pod-833c3944-6d00-4094-af79-b0a7b4621349" satisfied condition "success or failure" May 5 21:20:19.340: INFO: Trying to get logs from node jerma-worker pod pod-833c3944-6d00-4094-af79-b0a7b4621349 container test-container: STEP: delete the pod May 5 21:20:19.413: INFO: Waiting for pod pod-833c3944-6d00-4094-af79-b0a7b4621349 to disappear May 5 21:20:19.418: INFO: Pod pod-833c3944-6d00-4094-af79-b0a7b4621349 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:20:19.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-781" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":891,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:20:19.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3510 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-3510 May 5 21:20:19.532: INFO: Found 0 stateful pods, waiting for 1 May 5 21:20:29.536: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 5 21:20:29.588: INFO: Deleting all statefulset in ns statefulset-3510 May 5 21:20:29.627: INFO: Scaling statefulset ss to 0 May 5 21:20:49.668: INFO: Waiting for statefulset status.replicas updated to 0 May 5 21:20:49.671: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:20:49.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3510" for this suite. 
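Note: the scale-subresource test reads and updates /scale on the StatefulSet rather than patching the object directly; kubectl scale drives the same endpoint. A rough sketch, assuming a StatefulSet named ss already exists (the raw path below assumes the default namespace):

  # update replicas through the scale subresource
  kubectl scale statefulset/ss --replicas=2
  # confirm spec.replicas was modified on the StatefulSet itself
  kubectl get statefulset ss -o jsonpath='{.spec.replicas}'
  # the subresource can also be read directly
  kubectl get --raw /apis/apps/v1/namespaces/default/statefulsets/ss/scale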
• [SLOW TEST:30.265 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":65,"skipped":920,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:20:49.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0505 21:20:50.919648 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 5 21:20:50.919: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:20:50.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-359" for this suite. 
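Note: the garbage collector removes the deployment-owned ReplicaSet (and its pods) because each carries an ownerReference to the deleted Deployment; "not orphaning" is the default delete behavior, which is why the test briefly sees 1 rs / 2 pods before GC catches up. A rough sketch (illustrative name; --cascade=false is the kubectl v1.17-era syntax for the orphaning variant):

  kubectl create deployment gc-demo --image=docker.io/library/httpd:2.4.38-alpine
  kubectl get replicaset -l app=gc-demo   # the deployment-owned ReplicaSet
  kubectl delete deployment gc-demo       # default: dependents are garbage collected
  kubectl get replicaset -l app=gc-demo   # expected once GC runs: No resources found
  # orphaning variant: kubectl delete deployment gc-demo --cascade=false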
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":66,"skipped":931,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:20:50.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 21:20:51.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6149' May 5 21:20:51.170: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 5 21:20:51.170: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 5 21:20:51.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6149' May 5 21:20:51.298: INFO: stderr: "" May 5 21:20:51.298: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:20:51.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6149" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":67,"skipped":950,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:20:51.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-4jkm STEP: Creating a pod to test atomic-volume-subpath May 5 21:20:52.386: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4jkm" in namespace "subpath-8277" to be "success or failure" May 5 21:20:52.455: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Pending", Reason="", readiness=false. Elapsed: 68.77022ms May 5 21:20:54.473: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087549374s May 5 21:20:56.476: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 4.090173659s May 5 21:20:58.480: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 6.094221628s May 5 21:21:00.484: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 8.098220319s May 5 21:21:02.543: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 10.157533525s May 5 21:21:04.555: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 12.169486104s May 5 21:21:06.559: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 14.173499269s May 5 21:21:08.564: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 16.177894213s May 5 21:21:10.568: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 18.181987136s May 5 21:21:12.572: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 20.18639833s May 5 21:21:14.577: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 22.19128976s May 5 21:21:16.581: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Running", Reason="", readiness=true. Elapsed: 24.195607856s May 5 21:21:18.585: INFO: Pod "pod-subpath-test-configmap-4jkm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.199316755s STEP: Saw pod success May 5 21:21:18.585: INFO: Pod "pod-subpath-test-configmap-4jkm" satisfied condition "success or failure" May 5 21:21:18.587: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-4jkm container test-container-subpath-configmap-4jkm: STEP: delete the pod May 5 21:21:18.613: INFO: Waiting for pod pod-subpath-test-configmap-4jkm to disappear May 5 21:21:18.681: INFO: Pod pod-subpath-test-configmap-4jkm no longer exists STEP: Deleting pod pod-subpath-test-configmap-4jkm May 5 21:21:18.681: INFO: Deleting pod "pod-subpath-test-configmap-4jkm" in namespace "subpath-8277" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:21:18.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8277" for this suite. • [SLOW TEST:27.349 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":68,"skipped":969,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:21:18.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:21:18.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 5 21:21:19.423: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T21:21:19Z generation:1 name:name1 resourceVersion:13676007 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c48cebd8-442a-4e0b-8f96-135db6d5077a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 5 21:21:29.429: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T21:21:29Z generation:1 name:name2 resourceVersion:13676047 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:75f66281-6002-4c1d-8fb6-e1bf412b301b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 5 21:21:39.435: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-05-05T21:21:19Z generation:2 name:name1 resourceVersion:13676077 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c48cebd8-442a-4e0b-8f96-135db6d5077a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 5 21:21:49.441: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T21:21:29Z generation:2 name:name2 resourceVersion:13676107 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:75f66281-6002-4c1d-8fb6-e1bf412b301b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 5 21:21:59.460: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T21:21:19Z generation:2 name:name1 resourceVersion:13676135 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c48cebd8-442a-4e0b-8f96-135db6d5077a] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 5 21:22:09.468: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-05T21:21:29Z generation:2 name:name2 resourceVersion:13676163 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:75f66281-6002-4c1d-8fb6-e1bf412b301b] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:22:19.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-8597" for this suite. 
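Note: the ADDED/MODIFIED/DELETED events above are ordinary watch semantics applied to a custom resource. With the test's CRD installed (group mygroup.example.com, version v1beta1, plural noxus, cluster-scoped judging by the selfLinks), a similar stream can be observed by hand:

  # high-level: watch the custom resources like any built-in kind
  kubectl get noxus --watch
  # low-level: the raw watch stream served by the apiserver
  kubectl get --raw '/apis/mygroup.example.com/v1beta1/noxus?watch=true'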
• [SLOW TEST:61.297 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":69,"skipped":980,"failed":0} [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:22:19.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 5 21:22:20.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2578 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 5 21:22:23.569: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0505 21:22:23.499400 1620 log.go:172] (0xc000a1a840) (0xc00045b900) Create stream\nI0505 21:22:23.499464 1620 log.go:172] (0xc000a1a840) (0xc00045b900) Stream added, broadcasting: 1\nI0505 21:22:23.502571 1620 log.go:172] (0xc000a1a840) Reply frame received for 1\nI0505 21:22:23.502647 1620 log.go:172] (0xc000a1a840) (0xc000854000) Create stream\nI0505 21:22:23.502674 1620 log.go:172] (0xc000a1a840) (0xc000854000) Stream added, broadcasting: 3\nI0505 21:22:23.503803 1620 log.go:172] (0xc000a1a840) Reply frame received for 3\nI0505 21:22:23.503871 1620 log.go:172] (0xc000a1a840) (0xc00045b9a0) Create stream\nI0505 21:22:23.503895 1620 log.go:172] (0xc000a1a840) (0xc00045b9a0) Stream added, broadcasting: 5\nI0505 21:22:23.504926 1620 log.go:172] (0xc000a1a840) Reply frame received for 5\nI0505 21:22:23.504965 1620 log.go:172] (0xc000a1a840) (0xc0008540a0) Create stream\nI0505 21:22:23.504976 1620 log.go:172] (0xc000a1a840) (0xc0008540a0) Stream added, broadcasting: 7\nI0505 21:22:23.506131 1620 log.go:172] (0xc000a1a840) Reply frame received for 7\nI0505 21:22:23.506267 1620 log.go:172] (0xc000854000) (3) Writing data frame\nI0505 21:22:23.506392 1620 log.go:172] (0xc000854000) (3) Writing data frame\nI0505 21:22:23.507350 1620 log.go:172] (0xc000a1a840) Data frame received for 5\nI0505 21:22:23.507379 1620 log.go:172] (0xc00045b9a0) (5) Data frame handling\nI0505 21:22:23.507430 1620 log.go:172] (0xc00045b9a0) (5) Data frame sent\nI0505 21:22:23.508019 1620 log.go:172] (0xc000a1a840) Data frame received for 5\nI0505 21:22:23.508040 1620 log.go:172] (0xc00045b9a0) (5) Data frame handling\nI0505 21:22:23.508059 1620 log.go:172] (0xc00045b9a0) (5) Data frame sent\nI0505 21:22:23.548225 1620 log.go:172] (0xc000a1a840) Data frame received for 5\nI0505 21:22:23.548272 1620 log.go:172] (0xc00045b9a0) (5) Data frame handling\nI0505 21:22:23.548311 1620 log.go:172] (0xc000a1a840) Data frame received for 7\nI0505 21:22:23.548403 1620 log.go:172] (0xc0008540a0) (7) Data frame handling\nI0505 21:22:23.548560 1620 log.go:172] (0xc000a1a840) Data frame received for 1\nI0505 21:22:23.548609 1620 log.go:172] (0xc00045b900) (1) Data frame handling\nI0505 21:22:23.548818 1620 log.go:172] (0xc00045b900) (1) Data frame sent\nI0505 21:22:23.548966 1620 log.go:172] (0xc000a1a840) (0xc00045b900) Stream removed, broadcasting: 1\nI0505 21:22:23.549259 1620 log.go:172] (0xc000a1a840) (0xc000854000) Stream removed, broadcasting: 3\nI0505 21:22:23.549406 1620 log.go:172] (0xc000a1a840) Go away received\nI0505 21:22:23.549704 1620 log.go:172] (0xc000a1a840) (0xc00045b900) Stream removed, broadcasting: 1\nI0505 21:22:23.549751 1620 log.go:172] (0xc000a1a840) (0xc000854000) Stream removed, broadcasting: 3\nI0505 21:22:23.549771 1620 log.go:172] (0xc000a1a840) (0xc00045b9a0) Stream removed, broadcasting: 5\nI0505 21:22:23.549804 1620 log.go:172] (0xc000a1a840) (0xc0008540a0) Stream removed, broadcasting: 7\n" May 5 21:22:23.569: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:22:25.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2578" for this suite. 
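Note: this test pipes stdin through an attached Job pod and relies on --rm to delete the Job afterwards. Since --generator=job/v1 is deprecated (see the stderr above), a pod-based near-equivalent is sketched below; piping echo output stands in for the test's interactive stdin, and the abcd1234 string matches the stdout captured above:

  echo abcd1234 | kubectl run e2e-test-rm-busybox --image=docker.io/library/busybox:1.29 \
    --rm --stdin --restart=Never -- sh -c 'cat && echo "stdin closed"'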
• [SLOW TEST:5.618 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":70,"skipped":980,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:22:25.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 5 21:22:25.670: INFO: Waiting up to 5m0s for pod "pod-a961f85c-fc07-4309-a502-a4ccf0f36d84" in namespace "emptydir-4655" to be "success or failure" May 5 21:22:25.684: INFO: Pod "pod-a961f85c-fc07-4309-a502-a4ccf0f36d84": Phase="Pending", Reason="", readiness=false. Elapsed: 14.248356ms May 5 21:22:27.688: INFO: Pod "pod-a961f85c-fc07-4309-a502-a4ccf0f36d84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018107949s May 5 21:22:29.693: INFO: Pod "pod-a961f85c-fc07-4309-a502-a4ccf0f36d84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022451778s STEP: Saw pod success May 5 21:22:29.693: INFO: Pod "pod-a961f85c-fc07-4309-a502-a4ccf0f36d84" satisfied condition "success or failure" May 5 21:22:29.696: INFO: Trying to get logs from node jerma-worker2 pod pod-a961f85c-fc07-4309-a502-a4ccf0f36d84 container test-container: STEP: delete the pod May 5 21:22:29.734: INFO: Waiting for pod pod-a961f85c-fc07-4309-a502-a4ccf0f36d84 to disappear May 5 21:22:29.738: INFO: Pod pod-a961f85c-fc07-4309-a502-a4ccf0f36d84 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:22:29.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4655" for this suite. 
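The (non-root,0777,tmpfs) emptydir case above can be approximated with a one-off pod; this is a sketch, not the suite's code (busybox stands in for the suite's mount-test image, and the pod name is made up). medium: Memory provides the tmpfs backing, runAsUser makes the writer non-root, and the container writes and stats a 0777 file:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root, as in the test variant
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo data > /mnt/volume/f && chmod 0777 /mnt/volume/f && stat -c '%a' /mnt/volume/f && grep ' /mnt/volume ' /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
EOF
kubectl logs -f emptydir-0777-tmpfs-demo   # expect "777" and a tmpfs mount entry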
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":990,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:22:29.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:22:29.846: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:22:34.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8088" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":998,"failed":0} ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:22:34.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6625 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 5 21:22:34.116: INFO: Found 0 stateful pods, waiting for 3 May 5 21:22:44.121: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 5 21:22:44.121: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 5 21:22:44.121: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 5 21:22:44.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6625 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 21:22:44.405: INFO: stderr: "I0505 21:22:44.264481 1645 log.go:172] 
(0xc000630a50) (0xc000706000) Create stream\nI0505 21:22:44.264535 1645 log.go:172] (0xc000630a50) (0xc000706000) Stream added, broadcasting: 1\nI0505 21:22:44.273886 1645 log.go:172] (0xc000630a50) Reply frame received for 1\nI0505 21:22:44.273928 1645 log.go:172] (0xc000630a50) (0xc0005eba40) Create stream\nI0505 21:22:44.273940 1645 log.go:172] (0xc000630a50) (0xc0005eba40) Stream added, broadcasting: 3\nI0505 21:22:44.275034 1645 log.go:172] (0xc000630a50) Reply frame received for 3\nI0505 21:22:44.275080 1645 log.go:172] (0xc000630a50) (0xc0007060a0) Create stream\nI0505 21:22:44.275087 1645 log.go:172] (0xc000630a50) (0xc0007060a0) Stream added, broadcasting: 5\nI0505 21:22:44.277289 1645 log.go:172] (0xc000630a50) Reply frame received for 5\nI0505 21:22:44.362542 1645 log.go:172] (0xc000630a50) Data frame received for 5\nI0505 21:22:44.362569 1645 log.go:172] (0xc0007060a0) (5) Data frame handling\nI0505 21:22:44.362585 1645 log.go:172] (0xc0007060a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 21:22:44.395805 1645 log.go:172] (0xc000630a50) Data frame received for 3\nI0505 21:22:44.395853 1645 log.go:172] (0xc0005eba40) (3) Data frame handling\nI0505 21:22:44.395887 1645 log.go:172] (0xc0005eba40) (3) Data frame sent\nI0505 21:22:44.395924 1645 log.go:172] (0xc000630a50) Data frame received for 3\nI0505 21:22:44.395953 1645 log.go:172] (0xc0005eba40) (3) Data frame handling\nI0505 21:22:44.396178 1645 log.go:172] (0xc000630a50) Data frame received for 5\nI0505 21:22:44.396208 1645 log.go:172] (0xc0007060a0) (5) Data frame handling\nI0505 21:22:44.398544 1645 log.go:172] (0xc000630a50) Data frame received for 1\nI0505 21:22:44.398582 1645 log.go:172] (0xc000706000) (1) Data frame handling\nI0505 21:22:44.398634 1645 log.go:172] (0xc000706000) (1) Data frame sent\nI0505 21:22:44.398839 1645 log.go:172] (0xc000630a50) (0xc000706000) Stream removed, broadcasting: 1\nI0505 21:22:44.399182 1645 log.go:172] (0xc000630a50) Go away received\nI0505 21:22:44.399345 1645 log.go:172] (0xc000630a50) (0xc000706000) Stream removed, broadcasting: 1\nI0505 21:22:44.399372 1645 log.go:172] (0xc000630a50) (0xc0005eba40) Stream removed, broadcasting: 3\nI0505 21:22:44.399384 1645 log.go:172] (0xc000630a50) (0xc0007060a0) Stream removed, broadcasting: 5\n" May 5 21:22:44.405: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 21:22:44.405: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 5 21:22:44.442: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 5 21:22:54.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6625 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 21:22:54.715: INFO: stderr: "I0505 21:22:54.626087 1667 log.go:172] (0xc0005d6dc0) (0xc00063dc20) Create stream\nI0505 21:22:54.626141 1667 log.go:172] (0xc0005d6dc0) (0xc00063dc20) Stream added, broadcasting: 1\nI0505 21:22:54.628841 1667 log.go:172] (0xc0005d6dc0) Reply frame received for 1\nI0505 21:22:54.628885 1667 log.go:172] (0xc0005d6dc0) (0xc000914000) Create stream\nI0505 21:22:54.628899 1667 log.go:172] (0xc0005d6dc0) (0xc000914000) Stream added, broadcasting: 3\nI0505 21:22:54.630201 
1667 log.go:172] (0xc0005d6dc0) Reply frame received for 3\nI0505 21:22:54.630232 1667 log.go:172] (0xc0005d6dc0) (0xc0009140a0) Create stream\nI0505 21:22:54.630240 1667 log.go:172] (0xc0005d6dc0) (0xc0009140a0) Stream added, broadcasting: 5\nI0505 21:22:54.631434 1667 log.go:172] (0xc0005d6dc0) Reply frame received for 5\nI0505 21:22:54.707328 1667 log.go:172] (0xc0005d6dc0) Data frame received for 5\nI0505 21:22:54.707372 1667 log.go:172] (0xc0009140a0) (5) Data frame handling\nI0505 21:22:54.707387 1667 log.go:172] (0xc0009140a0) (5) Data frame sent\nI0505 21:22:54.707399 1667 log.go:172] (0xc0005d6dc0) Data frame received for 5\nI0505 21:22:54.707409 1667 log.go:172] (0xc0009140a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 21:22:54.707458 1667 log.go:172] (0xc0005d6dc0) Data frame received for 3\nI0505 21:22:54.707484 1667 log.go:172] (0xc000914000) (3) Data frame handling\nI0505 21:22:54.707496 1667 log.go:172] (0xc000914000) (3) Data frame sent\nI0505 21:22:54.707509 1667 log.go:172] (0xc0005d6dc0) Data frame received for 3\nI0505 21:22:54.707518 1667 log.go:172] (0xc000914000) (3) Data frame handling\nI0505 21:22:54.710152 1667 log.go:172] (0xc0005d6dc0) Data frame received for 1\nI0505 21:22:54.710164 1667 log.go:172] (0xc00063dc20) (1) Data frame handling\nI0505 21:22:54.710180 1667 log.go:172] (0xc00063dc20) (1) Data frame sent\nI0505 21:22:54.710193 1667 log.go:172] (0xc0005d6dc0) (0xc00063dc20) Stream removed, broadcasting: 1\nI0505 21:22:54.710329 1667 log.go:172] (0xc0005d6dc0) Go away received\nI0505 21:22:54.710453 1667 log.go:172] (0xc0005d6dc0) (0xc00063dc20) Stream removed, broadcasting: 1\nI0505 21:22:54.710470 1667 log.go:172] (0xc0005d6dc0) (0xc000914000) Stream removed, broadcasting: 3\nI0505 21:22:54.710480 1667 log.go:172] (0xc0005d6dc0) (0xc0009140a0) Stream removed, broadcasting: 5\n" May 5 21:22:54.715: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 21:22:54.715: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 21:23:04.737: INFO: Waiting for StatefulSet statefulset-6625/ss2 to complete update May 5 21:23:04.737: INFO: Waiting for Pod statefulset-6625/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 5 21:23:04.737: INFO: Waiting for Pod statefulset-6625/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 5 21:23:04.737: INFO: Waiting for Pod statefulset-6625/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 5 21:23:14.746: INFO: Waiting for StatefulSet statefulset-6625/ss2 to complete update May 5 21:23:14.746: INFO: Waiting for Pod statefulset-6625/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 5 21:23:14.746: INFO: Waiting for Pod statefulset-6625/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 5 21:23:24.744: INFO: Waiting for StatefulSet statefulset-6625/ss2 to complete update May 5 21:23:24.744: INFO: Waiting for Pod statefulset-6625/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 5 21:23:34.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6625 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 21:23:35.001: INFO: stderr: "I0505 21:23:34.877850 1690 log.go:172] (0xc00090ce70) (0xc0009523c0) Create 
stream\nI0505 21:23:34.877908 1690 log.go:172] (0xc00090ce70) (0xc0009523c0) Stream added, broadcasting: 1\nI0505 21:23:34.881729 1690 log.go:172] (0xc00090ce70) Reply frame received for 1\nI0505 21:23:34.881760 1690 log.go:172] (0xc00090ce70) (0xc0005365a0) Create stream\nI0505 21:23:34.881768 1690 log.go:172] (0xc00090ce70) (0xc0005365a0) Stream added, broadcasting: 3\nI0505 21:23:34.882728 1690 log.go:172] (0xc00090ce70) Reply frame received for 3\nI0505 21:23:34.882755 1690 log.go:172] (0xc00090ce70) (0xc00003a460) Create stream\nI0505 21:23:34.882765 1690 log.go:172] (0xc00090ce70) (0xc00003a460) Stream added, broadcasting: 5\nI0505 21:23:34.883478 1690 log.go:172] (0xc00090ce70) Reply frame received for 5\nI0505 21:23:34.958053 1690 log.go:172] (0xc00090ce70) Data frame received for 5\nI0505 21:23:34.958073 1690 log.go:172] (0xc00003a460) (5) Data frame handling\nI0505 21:23:34.958085 1690 log.go:172] (0xc00003a460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 21:23:34.992366 1690 log.go:172] (0xc00090ce70) Data frame received for 3\nI0505 21:23:34.992410 1690 log.go:172] (0xc0005365a0) (3) Data frame handling\nI0505 21:23:34.992527 1690 log.go:172] (0xc0005365a0) (3) Data frame sent\nI0505 21:23:34.993000 1690 log.go:172] (0xc00090ce70) Data frame received for 5\nI0505 21:23:34.993033 1690 log.go:172] (0xc00003a460) (5) Data frame handling\nI0505 21:23:34.993099 1690 log.go:172] (0xc00090ce70) Data frame received for 3\nI0505 21:23:34.993314 1690 log.go:172] (0xc0005365a0) (3) Data frame handling\nI0505 21:23:34.995314 1690 log.go:172] (0xc00090ce70) Data frame received for 1\nI0505 21:23:34.995347 1690 log.go:172] (0xc0009523c0) (1) Data frame handling\nI0505 21:23:34.995380 1690 log.go:172] (0xc0009523c0) (1) Data frame sent\nI0505 21:23:34.995403 1690 log.go:172] (0xc00090ce70) (0xc0009523c0) Stream removed, broadcasting: 1\nI0505 21:23:34.995636 1690 log.go:172] (0xc00090ce70) Go away received\nI0505 21:23:34.995942 1690 log.go:172] (0xc00090ce70) (0xc0009523c0) Stream removed, broadcasting: 1\nI0505 21:23:34.995983 1690 log.go:172] (0xc00090ce70) (0xc0005365a0) Stream removed, broadcasting: 3\nI0505 21:23:34.996007 1690 log.go:172] (0xc00090ce70) (0xc00003a460) Stream removed, broadcasting: 5\n" May 5 21:23:35.002: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 21:23:35.002: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 21:23:45.035: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 5 21:23:55.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6625 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 21:23:55.321: INFO: stderr: "I0505 21:23:55.238583 1711 log.go:172] (0xc0008d8b00) (0xc0007a81e0) Create stream\nI0505 21:23:55.238641 1711 log.go:172] (0xc0008d8b00) (0xc0007a81e0) Stream added, broadcasting: 1\nI0505 21:23:55.240997 1711 log.go:172] (0xc0008d8b00) Reply frame received for 1\nI0505 21:23:55.241036 1711 log.go:172] (0xc0008d8b00) (0xc0008bc000) Create stream\nI0505 21:23:55.241053 1711 log.go:172] (0xc0008d8b00) (0xc0008bc000) Stream added, broadcasting: 3\nI0505 21:23:55.242216 1711 log.go:172] (0xc0008d8b00) Reply frame received for 3\nI0505 21:23:55.242261 1711 log.go:172] (0xc0008d8b00) (0xc000619ae0) Create stream\nI0505 21:23:55.242278 1711 log.go:172] (0xc0008d8b00) 
(0xc000619ae0) Stream added, broadcasting: 5\nI0505 21:23:55.243150 1711 log.go:172] (0xc0008d8b00) Reply frame received for 5\nI0505 21:23:55.313087 1711 log.go:172] (0xc0008d8b00) Data frame received for 3\nI0505 21:23:55.313324 1711 log.go:172] (0xc0008bc000) (3) Data frame handling\nI0505 21:23:55.313344 1711 log.go:172] (0xc0008bc000) (3) Data frame sent\nI0505 21:23:55.313367 1711 log.go:172] (0xc0008d8b00) Data frame received for 5\nI0505 21:23:55.313376 1711 log.go:172] (0xc000619ae0) (5) Data frame handling\nI0505 21:23:55.313385 1711 log.go:172] (0xc000619ae0) (5) Data frame sent\nI0505 21:23:55.313395 1711 log.go:172] (0xc0008d8b00) Data frame received for 5\nI0505 21:23:55.313401 1711 log.go:172] (0xc000619ae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 21:23:55.313466 1711 log.go:172] (0xc0008d8b00) Data frame received for 3\nI0505 21:23:55.313478 1711 log.go:172] (0xc0008bc000) (3) Data frame handling\nI0505 21:23:55.315237 1711 log.go:172] (0xc0008d8b00) Data frame received for 1\nI0505 21:23:55.315248 1711 log.go:172] (0xc0007a81e0) (1) Data frame handling\nI0505 21:23:55.315253 1711 log.go:172] (0xc0007a81e0) (1) Data frame sent\nI0505 21:23:55.315397 1711 log.go:172] (0xc0008d8b00) (0xc0007a81e0) Stream removed, broadcasting: 1\nI0505 21:23:55.315477 1711 log.go:172] (0xc0008d8b00) Go away received\nI0505 21:23:55.315627 1711 log.go:172] (0xc0008d8b00) (0xc0007a81e0) Stream removed, broadcasting: 1\nI0505 21:23:55.315648 1711 log.go:172] (0xc0008d8b00) (0xc0008bc000) Stream removed, broadcasting: 3\nI0505 21:23:55.315658 1711 log.go:172] (0xc0008d8b00) (0xc000619ae0) Stream removed, broadcasting: 5\n" May 5 21:23:55.321: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 21:23:55.321: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 21:24:05.344: INFO: Waiting for StatefulSet statefulset-6625/ss2 to complete update May 5 21:24:05.344: INFO: Waiting for Pod statefulset-6625/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 21:24:05.344: INFO: Waiting for Pod statefulset-6625/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 21:24:05.344: INFO: Waiting for Pod statefulset-6625/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 21:24:15.354: INFO: Waiting for StatefulSet statefulset-6625/ss2 to complete update May 5 21:24:15.354: INFO: Waiting for Pod statefulset-6625/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 21:24:15.354: INFO: Waiting for Pod statefulset-6625/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 5 21:24:25.351: INFO: Waiting for StatefulSet statefulset-6625/ss2 to complete update May 5 21:24:25.351: INFO: Waiting for Pod statefulset-6625/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 5 21:24:35.351: INFO: Deleting all statefulset in ns statefulset-6625 May 5 21:24:35.354: INFO: Scaling statefulset ss2 to 0 May 5 21:25:05.369: INFO: Waiting for statefulset status.replicas updated to 0 May 5 21:25:05.371: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 
21:25:05.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6625" for this suite. • [SLOW TEST:151.390 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":73,"skipped":998,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:25:05.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 5 21:25:05.483: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 5 21:25:14.530: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:25:14.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2478" for this suite. 
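The submit/observe/delete flow above is driven by the watch API; the same sequence of events can be observed by hand with stock kubectl (pod name and sleep command are illustrative):

# Terminal 1: stream pod lifecycle events, as the test's watch does.
kubectl get pods -w

# Terminal 2: submit a pod, then delete it gracefully. The watch stream
# shows the creation, the Running transition, and finally the deletion,
# mirroring the "verifying pod deletion was observed" step above.
kubectl run watched-pod --image=docker.io/library/busybox:1.29 \
    --restart=Never -- sleep 3600
kubectl delete pod watched-pod --grace-period=30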
• [SLOW TEST:9.141 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1042,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:25:14.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:25:14.602: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d33e0a15-d5d5-48dd-a643-cf2a85c2f4ef" in namespace "projected-5526" to be "success or failure" May 5 21:25:14.606: INFO: Pod "downwardapi-volume-d33e0a15-d5d5-48dd-a643-cf2a85c2f4ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.534815ms May 5 21:25:16.610: INFO: Pod "downwardapi-volume-d33e0a15-d5d5-48dd-a643-cf2a85c2f4ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007623604s May 5 21:25:18.614: INFO: Pod "downwardapi-volume-d33e0a15-d5d5-48dd-a643-cf2a85c2f4ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011748859s STEP: Saw pod success May 5 21:25:18.614: INFO: Pod "downwardapi-volume-d33e0a15-d5d5-48dd-a643-cf2a85c2f4ef" satisfied condition "success or failure" May 5 21:25:18.617: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d33e0a15-d5d5-48dd-a643-cf2a85c2f4ef container client-container: STEP: delete the pod May 5 21:25:18.662: INFO: Waiting for pod downwardapi-volume-d33e0a15-d5d5-48dd-a643-cf2a85c2f4ef to disappear May 5 21:25:18.726: INFO: Pod downwardapi-volume-d33e0a15-d5d5-48dd-a643-cf2a85c2f4ef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:25:18.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5526" for this suite. 
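The downward API behaviour just exercised (node allocatable surfacing as the default when no cpu limit is set) hinges on a resourceFieldRef inside a projected volume. A self-contained sketch, with illustrative names and busybox in place of the suite's client image:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # No cpu limit is declared, so the projected file falls back to the
    # node's allocatable cpu, which is what the test asserts.
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
EOF
kubectl logs -f downwardapi-cpu-limit-demo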
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1043,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:25:18.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 21:25:19.151: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 21:25:21.162: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310719, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310719, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310719, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310719, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:25:24.243: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:25:36.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-2684" for this suite. STEP: Destroying namespace "webhook-2684-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.854 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":76,"skipped":1068,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:25:36.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 5 21:25:40.726: INFO: Pod pod-hostip-4c943247-5375-41c7-acf4-e2386c58822e has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:25:40.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5917" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1083,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:25:40.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 5 21:25:40.838: INFO: Waiting up to 5m0s for pod "client-containers-c065ce5b-7222-4623-be9d-f5860550292f" in namespace "containers-9034" to be "success or failure" May 5 21:25:40.841: INFO: Pod "client-containers-c065ce5b-7222-4623-be9d-f5860550292f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.243462ms May 5 21:25:42.845: INFO: Pod "client-containers-c065ce5b-7222-4623-be9d-f5860550292f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007445276s May 5 21:25:45.012: INFO: Pod "client-containers-c065ce5b-7222-4623-be9d-f5860550292f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174151171s May 5 21:25:47.016: INFO: Pod "client-containers-c065ce5b-7222-4623-be9d-f5860550292f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.178311643s STEP: Saw pod success May 5 21:25:47.016: INFO: Pod "client-containers-c065ce5b-7222-4623-be9d-f5860550292f" satisfied condition "success or failure" May 5 21:25:47.020: INFO: Trying to get logs from node jerma-worker pod client-containers-c065ce5b-7222-4623-be9d-f5860550292f container test-container: STEP: delete the pod May 5 21:25:47.059: INFO: Waiting for pod client-containers-c065ce5b-7222-4623-be9d-f5860550292f to disappear May 5 21:25:47.075: INFO: Pod client-containers-c065ce5b-7222-4623-be9d-f5860550292f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:25:47.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9034" for this suite. • [SLOW TEST:6.349 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1094,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:25:47.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:26:03.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5405" for this suite. • [SLOW TEST:16.162 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":79,"skipped":1127,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:26:03.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 5 21:26:11.411: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 21:26:11.433: INFO: Pod pod-with-poststart-http-hook still exists May 5 21:26:13.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 21:26:13.438: INFO: Pod pod-with-poststart-http-hook still exists May 5 21:26:15.433: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 5 21:26:15.437: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:26:15.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8745" for this suite. • [SLOW TEST:12.201 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1142,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:26:15.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:26:31.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3631" for this suite. • [SLOW TEST:16.215 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":81,"skipped":1154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:26:31.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 5 21:26:31.714: INFO: >>> kubeConfig: /root/.kube/config May 5 21:26:34.722: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:26:45.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3466" for this suite. • [SLOW TEST:13.691 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":82,"skipped":1192,"failed":0} S ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:26:45.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-dbc5295b-622d-46ee-9123-695b45f9656b STEP: Creating secret with name secret-projected-all-test-volume-7ee7b8e1-8389-4473-a42f-23404a3cc036 STEP: Creating a pod to test Check all projections for projected volume plugin May 5 21:26:45.431: INFO: Waiting up to 5m0s for pod "projected-volume-c021dc35-c8f9-4d1a-bfef-fe79b7ed848c" in namespace "projected-2486" to be "success or failure" May 5 21:26:45.495: INFO: Pod "projected-volume-c021dc35-c8f9-4d1a-bfef-fe79b7ed848c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 63.312579ms May 5 21:26:47.498: INFO: Pod "projected-volume-c021dc35-c8f9-4d1a-bfef-fe79b7ed848c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067148575s May 5 21:26:49.524: INFO: Pod "projected-volume-c021dc35-c8f9-4d1a-bfef-fe79b7ed848c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093213922s STEP: Saw pod success May 5 21:26:49.524: INFO: Pod "projected-volume-c021dc35-c8f9-4d1a-bfef-fe79b7ed848c" satisfied condition "success or failure" May 5 21:26:49.527: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-c021dc35-c8f9-4d1a-bfef-fe79b7ed848c container projected-all-volume-test: STEP: delete the pod May 5 21:26:49.545: INFO: Waiting for pod projected-volume-c021dc35-c8f9-4d1a-bfef-fe79b7ed848c to disappear May 5 21:26:49.549: INFO: Pod projected-volume-c021dc35-c8f9-4d1a-bfef-fe79b7ed848c no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:26:49.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2486" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:26:49.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-00259509-d232-4070-a5d7-1e9e34e1fdd8 May 5 21:26:49.690: INFO: Pod name my-hostname-basic-00259509-d232-4070-a5d7-1e9e34e1fdd8: Found 0 pods out of 1 May 5 21:26:54.694: INFO: Pod name my-hostname-basic-00259509-d232-4070-a5d7-1e9e34e1fdd8: Found 1 pods out of 1 May 5 21:26:54.694: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-00259509-d232-4070-a5d7-1e9e34e1fdd8" are running May 5 21:26:54.696: INFO: Pod "my-hostname-basic-00259509-d232-4070-a5d7-1e9e34e1fdd8-7kgzn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 21:26:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 21:26:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 21:26:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-05 21:26:49 +0000 UTC Reason: Message:}]) May 5 21:26:54.696: INFO: Trying to dial the pod May 5 21:26:59.705: INFO: Controller my-hostname-basic-00259509-d232-4070-a5d7-1e9e34e1fdd8: Got expected result from replica 1 
[my-hostname-basic-00259509-d232-4070-a5d7-1e9e34e1fdd8-7kgzn]: "my-hostname-basic-00259509-d232-4070-a5d7-1e9e34e1fdd8-7kgzn", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:26:59.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7705" for this suite. • [SLOW TEST:10.156 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":84,"skipped":1226,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:26:59.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-e992f874-d1a7-49db-a2ba-12bd0cf1ffe2 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:27:05.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4412" for this suite. 
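The binary-data case above rests on the ConfigMap binaryData field, which carries base64-encoded bytes alongside plain data keys and is decoded verbatim into the volume. An illustrative sketch (names made up; "3q2+7w==" encodes the bytes 0xde 0xad 0xbe 0xef):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-data-demo
data:
  data-1: value-1              # plain UTF-8 key, as in the text-data check
binaryData:
  dump.bin: 3q2+7w==           # base64-encoded payload
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-reader
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/cm/data-1 && hexdump -C /etc/cm/dump.bin"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: binary-data-demo
EOF
kubectl logs -f cm-reader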
• [SLOW TEST:6.148 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1238,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:27:05.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 5 21:27:05.912: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 21:27:05.959: INFO: Waiting for terminating namespaces to be deleted... May 5 21:27:05.962: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 5 21:27:05.967: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 21:27:05.967: INFO: Container kindnet-cni ready: true, restart count 0 May 5 21:27:05.967: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 21:27:05.967: INFO: Container kube-proxy ready: true, restart count 0 May 5 21:27:05.967: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 5 21:27:05.972: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 21:27:05.972: INFO: Container kube-proxy ready: true, restart count 0 May 5 21:27:05.972: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 5 21:27:05.972: INFO: Container kube-hunter ready: false, restart count 0 May 5 21:27:05.972: INFO: pod-configmaps-b7a96616-fd45-4e1a-8383-1d7767e629c8 from configmap-4412 started at 2020-05-05 21:26:59 +0000 UTC (2 container statuses recorded) May 5 21:27:05.972: INFO: Container configmap-volume-binary-test ready: false, restart count 0 May 5 21:27:05.972: INFO: Container configmap-volume-data-test ready: true, restart count 0 May 5 21:27:05.972: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 21:27:05.972: INFO: Container kindnet-cni ready: true, restart count 0 May 5 21:27:05.972: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 5 21:27:05.972: INFO: Container kube-bench ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to 
launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-eee85591-e049-49b2-a689-71be961ad276 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-eee85591-e049-49b2-a689-71be961ad276 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-eee85591-e049-49b2-a689-71be961ad276 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:27:22.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6199" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.364 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":86,"skipped":1241,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:27:22.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 5 21:27:23.624: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 5 21:27:25.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310843, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310843, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310843, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724310843, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:27:28.723: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:27:28.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:27:30.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3155" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:8.118 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":87,"skipped":1245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:27:30.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-b5d984c9-2476-485f-828d-aca0b836f9bf STEP: Creating configMap with name cm-test-opt-upd-0d3042e4-66e4-48bf-975c-5aa41d6b5032 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-b5d984c9-2476-485f-828d-aca0b836f9bf STEP: Updating configmap cm-test-opt-upd-0d3042e4-66e4-48bf-975c-5aa41d6b5032 STEP: Creating configMap with name 
cm-test-opt-create-403098b3-b95a-47c6-8bba-8612d00f9c6c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:27:40.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-164" for this suite. • [SLOW TEST:10.371 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:27:40.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 5 21:27:40.842: INFO: Waiting up to 5m0s for pod "pod-631b0890-3993-41cc-97e6-88c186655dd8" in namespace "emptydir-5420" to be "success or failure" May 5 21:27:40.863: INFO: Pod "pod-631b0890-3993-41cc-97e6-88c186655dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.20313ms May 5 21:27:42.948: INFO: Pod "pod-631b0890-3993-41cc-97e6-88c186655dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105547229s May 5 21:27:44.951: INFO: Pod "pod-631b0890-3993-41cc-97e6-88c186655dd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.108987379s STEP: Saw pod success May 5 21:27:44.951: INFO: Pod "pod-631b0890-3993-41cc-97e6-88c186655dd8" satisfied condition "success or failure" May 5 21:27:44.954: INFO: Trying to get logs from node jerma-worker pod pod-631b0890-3993-41cc-97e6-88c186655dd8 container test-container: STEP: delete the pod May 5 21:27:44.998: INFO: Waiting for pod pod-631b0890-3993-41cc-97e6-88c186655dd8 to disappear May 5 21:27:45.036: INFO: Pod pod-631b0890-3993-41cc-97e6-88c186655dd8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:27:45.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5420" for this suite. 
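Editor's sketch: the EmptyDir (root,0644,default) test above amounts to creating a pod whose container writes a file with mode 0644 onto an emptyDir mount on the default medium, then waiting for the "success or failure" condition and scraping the container log. A minimal client-go equivalent follows; it assumes client-go v0.17.x (where Create takes the object directly, no context argument, matching the v1.17 suite), and the busybox image plus all names are illustrative stand-ins, not the suite's actual mounttest image.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty EmptyDirVolumeSource = "default" medium (node disk);
				// corev1.StorageMediumMemory would back it with tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in for the suite's mounttest image
				// Write a 0644 file and print its permissions, then exit 0.
				Command:      []string{"sh", "-c", "touch /ed/f && chmod 0644 /ed/f && ls -l /ed/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/ed"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
	// The framework then polls the pod phase until Succeeded, exactly as the
	// "Waiting up to 5m0s ... to be 'success or failure'" lines above show.
}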
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:27:45.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-7f074438-ef43-47dd-8fb3-f133c0c7d1ff STEP: Creating a pod to test consume secrets May 5 21:27:45.555: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-16c427f5-9c23-47c0-ad84-c03b1319354a" in namespace "projected-6887" to be "success or failure" May 5 21:27:45.601: INFO: Pod "pod-projected-secrets-16c427f5-9c23-47c0-ad84-c03b1319354a": Phase="Pending", Reason="", readiness=false. Elapsed: 46.245443ms May 5 21:27:47.605: INFO: Pod "pod-projected-secrets-16c427f5-9c23-47c0-ad84-c03b1319354a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050630728s May 5 21:27:49.610: INFO: Pod "pod-projected-secrets-16c427f5-9c23-47c0-ad84-c03b1319354a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05493687s STEP: Saw pod success May 5 21:27:49.610: INFO: Pod "pod-projected-secrets-16c427f5-9c23-47c0-ad84-c03b1319354a" satisfied condition "success or failure" May 5 21:27:49.613: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-16c427f5-9c23-47c0-ad84-c03b1319354a container projected-secret-volume-test: STEP: delete the pod May 5 21:27:49.661: INFO: Waiting for pod pod-projected-secrets-16c427f5-9c23-47c0-ad84-c03b1319354a to disappear May 5 21:27:49.706: INFO: Pod pod-projected-secrets-16c427f5-9c23-47c0-ad84-c03b1319354a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:27:49.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6887" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:27:49.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:27:49.792: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e260fc1-e00c-42f6-aa1e-a940c69dfb4b" in namespace "downward-api-874" to be "success or failure" May 5 21:27:49.796: INFO: Pod "downwardapi-volume-6e260fc1-e00c-42f6-aa1e-a940c69dfb4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023588ms May 5 21:27:51.800: INFO: Pod "downwardapi-volume-6e260fc1-e00c-42f6-aa1e-a940c69dfb4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008033786s May 5 21:27:53.803: INFO: Pod "downwardapi-volume-6e260fc1-e00c-42f6-aa1e-a940c69dfb4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011285439s STEP: Saw pod success May 5 21:27:53.803: INFO: Pod "downwardapi-volume-6e260fc1-e00c-42f6-aa1e-a940c69dfb4b" satisfied condition "success or failure" May 5 21:27:53.806: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6e260fc1-e00c-42f6-aa1e-a940c69dfb4b container client-container: STEP: delete the pod May 5 21:27:53.839: INFO: Waiting for pod downwardapi-volume-6e260fc1-e00c-42f6-aa1e-a940c69dfb4b to disappear May 5 21:27:53.850: INFO: Pod downwardapi-volume-6e260fc1-e00c-42f6-aa1e-a940c69dfb4b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:27:53.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-874" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1345,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:27:53.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-de249a60-d595-4cb9-93a6-cc4013f03649 STEP: Creating a pod to test consume secrets May 5 21:27:54.041: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9b48d43f-6e43-4b1d-9c9f-93d578b8df36" in namespace "projected-8123" to be "success or failure" May 5 21:27:54.059: INFO: Pod "pod-projected-secrets-9b48d43f-6e43-4b1d-9c9f-93d578b8df36": Phase="Pending", Reason="", readiness=false. Elapsed: 17.825805ms May 5 21:27:56.063: INFO: Pod "pod-projected-secrets-9b48d43f-6e43-4b1d-9c9f-93d578b8df36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021560998s May 5 21:27:58.083: INFO: Pod "pod-projected-secrets-9b48d43f-6e43-4b1d-9c9f-93d578b8df36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041995166s STEP: Saw pod success May 5 21:27:58.084: INFO: Pod "pod-projected-secrets-9b48d43f-6e43-4b1d-9c9f-93d578b8df36" satisfied condition "success or failure" May 5 21:27:58.088: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-9b48d43f-6e43-4b1d-9c9f-93d578b8df36 container projected-secret-volume-test: STEP: delete the pod May 5 21:27:58.130: INFO: Waiting for pod pod-projected-secrets-9b48d43f-6e43-4b1d-9c9f-93d578b8df36 to disappear May 5 21:27:58.170: INFO: Pod pod-projected-secrets-9b48d43f-6e43-4b1d-9c9f-93d578b8df36 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:27:58.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8123" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:27:58.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:28:02.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4318" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":93,"skipped":1413,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:28:02.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 5 21:28:02.757: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-a 53b88b84-2788-4925-a483-47a69a5b1226 13678341 0 2020-05-05 21:28:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 5 21:28:02.757: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-a 53b88b84-2788-4925-a483-47a69a5b1226 13678341 0 2020-05-05 21:28:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 5 21:28:12.765: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-a 53b88b84-2788-4925-a483-47a69a5b1226 13678389 0 
2020-05-05 21:28:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 5 21:28:12.765: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-a 53b88b84-2788-4925-a483-47a69a5b1226 13678389 0 2020-05-05 21:28:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 5 21:28:22.774: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-a 53b88b84-2788-4925-a483-47a69a5b1226 13678419 0 2020-05-05 21:28:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 5 21:28:22.774: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-a 53b88b84-2788-4925-a483-47a69a5b1226 13678419 0 2020-05-05 21:28:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 5 21:28:32.781: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-a 53b88b84-2788-4925-a483-47a69a5b1226 13678449 0 2020-05-05 21:28:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 5 21:28:32.781: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-a 53b88b84-2788-4925-a483-47a69a5b1226 13678449 0 2020-05-05 21:28:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 5 21:28:42.789: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-b 943b52e7-2525-4903-88e7-702b13a9c389 13678480 0 2020-05-05 21:28:42 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 5 21:28:42.789: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-b 943b52e7-2525-4903-88e7-702b13a9c389 13678480 0 2020-05-05 21:28:42 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 5 21:28:52.796: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-b 943b52e7-2525-4903-88e7-702b13a9c389 13678509 0 2020-05-05 21:28:42 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 5 21:28:52.796: INFO: Got : 
DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2083 /api/v1/namespaces/watch-2083/configmaps/e2e-watch-test-configmap-b 943b52e7-2525-4903-88e7-702b13a9c389 13678509 0 2020-05-05 21:28:42 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:29:02.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2083" for this suite. • [SLOW TEST:60.126 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":94,"skipped":1417,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:29:02.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:29:02.874: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-26f8db7c-e85a-441f-bca4-da7d944ec154" in namespace "security-context-test-6617" to be "success or failure" May 5 21:29:02.878: INFO: Pod "busybox-privileged-false-26f8db7c-e85a-441f-bca4-da7d944ec154": Phase="Pending", Reason="", readiness=false. Elapsed: 3.99869ms May 5 21:29:04.898: INFO: Pod "busybox-privileged-false-26f8db7c-e85a-441f-bca4-da7d944ec154": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023630711s May 5 21:29:06.902: INFO: Pod "busybox-privileged-false-26f8db7c-e85a-441f-bca4-da7d944ec154": Phase="Running", Reason="", readiness=true. Elapsed: 4.02796313s May 5 21:29:08.906: INFO: Pod "busybox-privileged-false-26f8db7c-e85a-441f-bca4-da7d944ec154": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.031739808s May 5 21:29:08.906: INFO: Pod "busybox-privileged-false-26f8db7c-e85a-441f-bca4-da7d944ec154" satisfied condition "success or failure" May 5 21:29:08.912: INFO: Got logs for pod "busybox-privileged-false-26f8db7c-e85a-441f-bca4-da7d944ec154": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:29:08.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6617" for this suite. • [SLOW TEST:6.112 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:29:08.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 5 21:29:09.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5686' May 5 21:29:12.140: INFO: stderr: "" May 5 21:29:12.140: INFO: stdout: "pod/pause created\n" May 5 21:29:12.140: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 5 21:29:12.140: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5686" to be "running and ready" May 5 21:29:12.171: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 31.007217ms May 5 21:29:14.234: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093876006s May 5 21:29:16.238: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.098342042s May 5 21:29:16.238: INFO: Pod "pause" satisfied condition "running and ready" May 5 21:29:16.238: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 5 21:29:16.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5686' May 5 21:29:16.353: INFO: stderr: "" May 5 21:29:16.353: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 5 21:29:16.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5686' May 5 21:29:16.446: INFO: stderr: "" May 5 21:29:16.446: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 5 21:29:16.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5686' May 5 21:29:16.549: INFO: stderr: "" May 5 21:29:16.549: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 5 21:29:16.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5686' May 5 21:29:16.646: INFO: stderr: "" May 5 21:29:16.646: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 5 21:29:16.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5686' May 5 21:29:16.767: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 21:29:16.767: INFO: stdout: "pod \"pause\" force deleted\n" May 5 21:29:16.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5686' May 5 21:29:16.868: INFO: stderr: "No resources found in kubectl-5686 namespace.\n" May 5 21:29:16.868: INFO: stdout: "" May 5 21:29:16.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5686 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 5 21:29:16.975: INFO: stderr: "" May 5 21:29:16.975: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:29:16.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5686" for this suite. 
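Editor's sketch: the kubectl label add/remove cycle just shown (label pods pause testing-label=testing-label-value, then testing-label-) has a direct client-go equivalent: mutate the pod's Labels map and Update. A minimal sketch, same v0.17.x assumption; the namespace below is the one from this run and only exists while the test is live.

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods := cs.CoreV1().Pods("kubectl-5686") // namespace from the run above

	// Add the label (kubectl label pods pause testing-label=testing-label-value).
	pod, err := pods.Get("pause", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["testing-label"] = "testing-label-value"
	if pod, err = pods.Update(pod); err != nil {
		panic(err)
	}

	// Remove it again (kubectl label pods pause testing-label-), reusing the
	// updated object so the resourceVersion is current.
	delete(pod.Labels, "testing-label")
	if _, err = pods.Update(pod); err != nil {
		panic(err)
	}
}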
• [SLOW TEST:8.060 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":96,"skipped":1454,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:29:16.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3322 STEP: creating a selector STEP: Creating the service pods in kubernetes May 5 21:29:17.209: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 5 21:29:45.419: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.123:8080/dial?request=hostname&protocol=http&host=10.244.1.185&port=8080&tries=1'] Namespace:pod-network-test-3322 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 21:29:45.419: INFO: >>> kubeConfig: /root/.kube/config I0505 21:29:45.447520 7 log.go:172] (0xc00159a580) (0xc0014dc280) Create stream I0505 21:29:45.447617 7 log.go:172] (0xc00159a580) (0xc0014dc280) Stream added, broadcasting: 1 I0505 21:29:45.449952 7 log.go:172] (0xc00159a580) Reply frame received for 1 I0505 21:29:45.449997 7 log.go:172] (0xc00159a580) (0xc001f0f680) Create stream I0505 21:29:45.450017 7 log.go:172] (0xc00159a580) (0xc001f0f680) Stream added, broadcasting: 3 I0505 21:29:45.451006 7 log.go:172] (0xc00159a580) Reply frame received for 3 I0505 21:29:45.451046 7 log.go:172] (0xc00159a580) (0xc000fb2000) Create stream I0505 21:29:45.451068 7 log.go:172] (0xc00159a580) (0xc000fb2000) Stream added, broadcasting: 5 I0505 21:29:45.451946 7 log.go:172] (0xc00159a580) Reply frame received for 5 I0505 21:29:45.525664 7 log.go:172] (0xc00159a580) Data frame received for 3 I0505 21:29:45.525777 7 log.go:172] (0xc001f0f680) (3) Data frame handling I0505 21:29:45.525829 7 log.go:172] (0xc001f0f680) (3) Data frame sent I0505 21:29:45.526028 7 log.go:172] (0xc00159a580) Data frame received for 5 I0505 21:29:45.526070 7 log.go:172] (0xc000fb2000) (5) Data frame handling I0505 21:29:45.526106 7 log.go:172] (0xc00159a580) Data frame received for 3 I0505 21:29:45.526130 7 log.go:172] (0xc001f0f680) (3) Data frame handling I0505 21:29:45.527753 7 log.go:172] (0xc00159a580) Data frame received for 1 I0505 21:29:45.527779 7 log.go:172] (0xc0014dc280) (1) Data frame handling I0505 
21:29:45.527801 7 log.go:172] (0xc0014dc280) (1) Data frame sent I0505 21:29:45.527833 7 log.go:172] (0xc00159a580) (0xc0014dc280) Stream removed, broadcasting: 1 I0505 21:29:45.527887 7 log.go:172] (0xc00159a580) Go away received I0505 21:29:45.528016 7 log.go:172] (0xc00159a580) (0xc0014dc280) Stream removed, broadcasting: 1 I0505 21:29:45.528035 7 log.go:172] (0xc00159a580) (0xc001f0f680) Stream removed, broadcasting: 3 I0505 21:29:45.528051 7 log.go:172] (0xc00159a580) (0xc000fb2000) Stream removed, broadcasting: 5 May 5 21:29:45.528: INFO: Waiting for responses: map[] May 5 21:29:45.531: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.123:8080/dial?request=hostname&protocol=http&host=10.244.2.122&port=8080&tries=1'] Namespace:pod-network-test-3322 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 21:29:45.531: INFO: >>> kubeConfig: /root/.kube/config I0505 21:29:45.567784 7 log.go:172] (0xc00159ac60) (0xc0014dc780) Create stream I0505 21:29:45.567819 7 log.go:172] (0xc00159ac60) (0xc0014dc780) Stream added, broadcasting: 1 I0505 21:29:45.570623 7 log.go:172] (0xc00159ac60) Reply frame received for 1 I0505 21:29:45.570706 7 log.go:172] (0xc00159ac60) (0xc000fb20a0) Create stream I0505 21:29:45.570737 7 log.go:172] (0xc00159ac60) (0xc000fb20a0) Stream added, broadcasting: 3 I0505 21:29:45.571871 7 log.go:172] (0xc00159ac60) Reply frame received for 3 I0505 21:29:45.571918 7 log.go:172] (0xc00159ac60) (0xc000fb2140) Create stream I0505 21:29:45.571929 7 log.go:172] (0xc00159ac60) (0xc000fb2140) Stream added, broadcasting: 5 I0505 21:29:45.573021 7 log.go:172] (0xc00159ac60) Reply frame received for 5 I0505 21:29:45.642983 7 log.go:172] (0xc00159ac60) Data frame received for 3 I0505 21:29:45.643018 7 log.go:172] (0xc000fb20a0) (3) Data frame handling I0505 21:29:45.643048 7 log.go:172] (0xc000fb20a0) (3) Data frame sent I0505 21:29:45.643341 7 log.go:172] (0xc00159ac60) Data frame received for 3 I0505 21:29:45.643384 7 log.go:172] (0xc000fb20a0) (3) Data frame handling I0505 21:29:45.643408 7 log.go:172] (0xc00159ac60) Data frame received for 5 I0505 21:29:45.643447 7 log.go:172] (0xc000fb2140) (5) Data frame handling I0505 21:29:45.645344 7 log.go:172] (0xc00159ac60) Data frame received for 1 I0505 21:29:45.645375 7 log.go:172] (0xc0014dc780) (1) Data frame handling I0505 21:29:45.645399 7 log.go:172] (0xc0014dc780) (1) Data frame sent I0505 21:29:45.645423 7 log.go:172] (0xc00159ac60) (0xc0014dc780) Stream removed, broadcasting: 1 I0505 21:29:45.645443 7 log.go:172] (0xc00159ac60) Go away received I0505 21:29:45.645546 7 log.go:172] (0xc00159ac60) (0xc0014dc780) Stream removed, broadcasting: 1 I0505 21:29:45.645574 7 log.go:172] (0xc00159ac60) (0xc000fb20a0) Stream removed, broadcasting: 3 I0505 21:29:45.645600 7 log.go:172] (0xc00159ac60) (0xc000fb2140) Stream removed, broadcasting: 5 May 5 21:29:45.645: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:29:45.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3322" for this suite. 
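Editor's sketch: the ExecWithOptions curl above hits the agnhost test container's /dial endpoint, which forwards a hostname request to the target pod and reports what answered. From inside the cluster the same probe is a plain HTTP GET; the IPs below are the pod IPs captured in this run and are only reachable in-cluster, so treat this purely as a sketch of the request shape.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// host-test-container-pod asks the dial server to reach the netserver pod
	// over HTTP and return the hostname it saw.
	url := "http://10.244.2.123:8080/dial?request=hostname&protocol=http&host=10.244.1.185&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The dial endpoint answers with JSON such as {"responses":[...]}; the
	// test compares the returned hostnames against the expected endpoint pods
	// ("Waiting for responses: map[]" above means nothing was left outstanding).
	fmt.Println(string(body))
}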
• [SLOW TEST:28.678 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1464,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:29:45.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:29:45.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5632" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":98,"skipped":1481,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:29:45.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:29:59.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4362" for this suite. • [SLOW TEST:13.349 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":99,"skipped":1525,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:29:59.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:29:59.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 5 21:29:59.338: INFO: stderr: "" May 5 21:29:59.338: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T17:27:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:29:59.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6237" for this suite. 
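Editor's sketch: the server half of the kubectl version output above is available programmatically through the discovery client. A minimal sketch, same client-go v0.17.x assumption:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same data as the "Server Version: version.Info{...}" block in the
	// kubectl stdout above: GitVersion, GitCommit, BuildDate, and so on.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Server Version: %s (commit %s, built %s)\n", v.GitVersion, v.GitCommit, v.BuildDate)
}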
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":100,"skipped":1529,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:29:59.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-fd52acc3-9184-4557-b844-d5b0c712e4c1 STEP: Creating a pod to test consume secrets May 5 21:29:59.474: INFO: Waiting up to 5m0s for pod "pod-secrets-c69170b8-90b4-4540-ac25-58cb5969ab51" in namespace "secrets-5112" to be "success or failure" May 5 21:29:59.478: INFO: Pod "pod-secrets-c69170b8-90b4-4540-ac25-58cb5969ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077292ms May 5 21:30:01.491: INFO: Pod "pod-secrets-c69170b8-90b4-4540-ac25-58cb5969ab51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016700878s May 5 21:30:03.495: INFO: Pod "pod-secrets-c69170b8-90b4-4540-ac25-58cb5969ab51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02089662s STEP: Saw pod success May 5 21:30:03.495: INFO: Pod "pod-secrets-c69170b8-90b4-4540-ac25-58cb5969ab51" satisfied condition "success or failure" May 5 21:30:03.498: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c69170b8-90b4-4540-ac25-58cb5969ab51 container secret-volume-test: STEP: delete the pod May 5 21:30:03.583: INFO: Waiting for pod pod-secrets-c69170b8-90b4-4540-ac25-58cb5969ab51 to disappear May 5 21:30:03.605: INFO: Pod pod-secrets-c69170b8-90b4-4540-ac25-58cb5969ab51 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:30:03.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5112" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1572,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:30:03.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d0540f3d-5b9b-4a20-897b-af61dbcc4d4d STEP: Creating a pod to test consume configMaps May 5 21:30:03.722: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-504da3a6-83bb-4bce-a543-562c79182f73" in namespace "projected-6266" to be "success or failure" May 5 21:30:03.731: INFO: Pod "pod-projected-configmaps-504da3a6-83bb-4bce-a543-562c79182f73": Phase="Pending", Reason="", readiness=false. Elapsed: 8.936259ms May 5 21:30:05.735: INFO: Pod "pod-projected-configmaps-504da3a6-83bb-4bce-a543-562c79182f73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013005799s May 5 21:30:07.739: INFO: Pod "pod-projected-configmaps-504da3a6-83bb-4bce-a543-562c79182f73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017227583s STEP: Saw pod success May 5 21:30:07.739: INFO: Pod "pod-projected-configmaps-504da3a6-83bb-4bce-a543-562c79182f73" satisfied condition "success or failure" May 5 21:30:07.742: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-504da3a6-83bb-4bce-a543-562c79182f73 container projected-configmap-volume-test: STEP: delete the pod May 5 21:30:07.774: INFO: Waiting for pod pod-projected-configmaps-504da3a6-83bb-4bce-a543-562c79182f73 to disappear May 5 21:30:07.781: INFO: Pod pod-projected-configmaps-504da3a6-83bb-4bce-a543-562c79182f73 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:30:07.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6266" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1573,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:30:07.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 5 21:30:08.047: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1955 /api/v1/namespaces/watch-1955/configmaps/e2e-watch-test-resource-version a0b51721-f851-481b-bec9-b6683eccccb3 13678934 0 2020-05-05 21:30:07 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 5 21:30:08.047: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1955 /api/v1/namespaces/watch-1955/configmaps/e2e-watch-test-resource-version a0b51721-f851-481b-bec9-b6683eccccb3 13678935 0 2020-05-05 21:30:07 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:30:08.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1955" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":103,"skipped":1600,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:30:08.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:30:19.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9713" for this suite. • [SLOW TEST:11.116 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":104,"skipped":1601,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:30:19.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 5 21:30:19.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-124' May 5 21:30:19.483: INFO: stderr: "" May 5 21:30:19.483: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 5 21:30:19.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-124' May 5 21:30:19.580: INFO: stderr: "" May 5 21:30:19.580: INFO: stdout: "update-demo-nautilus-2cpdk update-demo-nautilus-vgkdg " May 5 21:30:19.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cpdk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-124' May 5 21:30:19.669: INFO: stderr: "" May 5 21:30:19.669: INFO: stdout: "" May 5 21:30:19.669: INFO: update-demo-nautilus-2cpdk is created but not running May 5 21:30:24.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-124' May 5 21:30:24.771: INFO: stderr: "" May 5 21:30:24.771: INFO: stdout: "update-demo-nautilus-2cpdk update-demo-nautilus-vgkdg " May 5 21:30:24.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cpdk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-124' May 5 21:30:24.867: INFO: stderr: "" May 5 21:30:24.867: INFO: stdout: "true" May 5 21:30:24.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cpdk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-124' May 5 21:30:24.958: INFO: stderr: "" May 5 21:30:24.958: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 21:30:24.958: INFO: validating pod update-demo-nautilus-2cpdk May 5 21:30:24.962: INFO: got data: { "image": "nautilus.jpg" } May 5 21:30:24.962: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 21:30:24.962: INFO: update-demo-nautilus-2cpdk is verified up and running May 5 21:30:24.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgkdg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-124' May 5 21:30:25.060: INFO: stderr: "" May 5 21:30:25.060: INFO: stdout: "true" May 5 21:30:25.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgkdg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-124' May 5 21:30:25.158: INFO: stderr: "" May 5 21:30:25.158: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 5 21:30:25.158: INFO: validating pod update-demo-nautilus-vgkdg May 5 21:30:25.162: INFO: got data: { "image": "nautilus.jpg" } May 5 21:30:25.162: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 5 21:30:25.162: INFO: update-demo-nautilus-vgkdg is verified up and running STEP: using delete to clean up resources May 5 21:30:25.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-124' May 5 21:30:25.261: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:30:26.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:30:26.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3881" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":106,"skipped":1630,"failed":0}
S
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:30:26.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-4d427839-4cb5-4916-aad8-64a726ec58d5
STEP: Creating a pod to test consume secrets
May 5 21:30:26.962: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-047df2d0-fecd-40f4-8341-1df89a7e98ca" in namespace "projected-7116" to be "success or failure"
May 5 21:30:26.976: INFO: Pod "pod-projected-secrets-047df2d0-fecd-40f4-8341-1df89a7e98ca": Phase="Pending", Reason="", readiness=false. Elapsed: 14.638213ms
May 5 21:30:28.980: INFO: Pod "pod-projected-secrets-047df2d0-fecd-40f4-8341-1df89a7e98ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017736918s
May 5 21:30:30.985: INFO: Pod "pod-projected-secrets-047df2d0-fecd-40f4-8341-1df89a7e98ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023578107s
STEP: Saw pod success
May 5 21:30:30.985: INFO: Pod "pod-projected-secrets-047df2d0-fecd-40f4-8341-1df89a7e98ca" satisfied condition "success or failure"
May 5 21:30:30.988: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-047df2d0-fecd-40f4-8341-1df89a7e98ca container projected-secret-volume-test:
STEP: delete the pod
May 5 21:30:31.122: INFO: Waiting for pod pod-projected-secrets-047df2d0-fecd-40f4-8341-1df89a7e98ca to disappear
May 5 21:30:31.135: INFO: Pod pod-projected-secrets-047df2d0-fecd-40f4-8341-1df89a7e98ca no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:30:31.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7116" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1631,"failed":0}
SSSSSSS
------------------------------
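A pod of the shape this test exercises, a projected secret volume with a key-to-path mapping and an explicit item mode, might look like the sketch below. All names, the key, and the 0400 mode are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/target"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-secret-demo    # assumed to exist with key data-1
          items:
          - key: data-1
            path: target
            mode: 0400                   # the "Item Mode" in the test title
EOF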
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1631,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:30:31.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:30:31.191: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1aa29367-aee0-4504-a159-bc07b2b7cc7d" in namespace "downward-api-2996" to be "success or failure" May 5 21:30:31.195: INFO: Pod "downwardapi-volume-1aa29367-aee0-4504-a159-bc07b2b7cc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.444259ms May 5 21:30:33.265: INFO: Pod "downwardapi-volume-1aa29367-aee0-4504-a159-bc07b2b7cc7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073832193s May 5 21:30:35.269: INFO: Pod "downwardapi-volume-1aa29367-aee0-4504-a159-bc07b2b7cc7d": Phase="Running", Reason="", readiness=true. Elapsed: 4.077818717s May 5 21:30:37.274: INFO: Pod "downwardapi-volume-1aa29367-aee0-4504-a159-bc07b2b7cc7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082237183s STEP: Saw pod success May 5 21:30:37.274: INFO: Pod "downwardapi-volume-1aa29367-aee0-4504-a159-bc07b2b7cc7d" satisfied condition "success or failure" May 5 21:30:37.277: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1aa29367-aee0-4504-a159-bc07b2b7cc7d container client-container: STEP: delete the pod May 5 21:30:37.314: INFO: Waiting for pod downwardapi-volume-1aa29367-aee0-4504-a159-bc07b2b7cc7d to disappear May 5 21:30:37.357: INFO: Pod downwardapi-volume-1aa29367-aee0-4504-a159-bc07b2b7cc7d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:30:37.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2996" for this suite. 
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:30:37.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
May 5 21:30:41.960: INFO: Successfully updated pod "annotationupdate389d53d8-e955-4a8f-becb-dd13d4108b41"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:30:46.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8529" for this suite.
• [SLOW TEST:8.638 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1638,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:30:46.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:31:19.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5332" for this suite.
• [SLOW TEST:33.093 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1697,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:31:19.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 5 21:31:20.071: INFO: Pod name wrapped-volume-race-e3652c65-cf96-4b30-8094-5b3fa61bbcfd: Found 0 pods out of 5
May 5 21:31:25.078: INFO: Pod name wrapped-volume-race-e3652c65-cf96-4b30-8094-5b3fa61bbcfd: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e3652c65-cf96-4b30-8094-5b3fa61bbcfd in namespace emptydir-wrapper-6547, will wait for the garbage collector to delete the pods
May 5 21:31:39.200: INFO: Deleting ReplicationController wrapped-volume-race-e3652c65-cf96-4b30-8094-5b3fa61bbcfd took: 43.565552ms
May 5 21:31:39.500: INFO: Terminating ReplicationController wrapped-volume-race-e3652c65-cf96-4b30-8094-5b3fa61bbcfd pods took: 300.244048ms
STEP: Creating RC which spawns configmap-volume pods
May 5 21:31:50.337: INFO: Pod name wrapped-volume-race-35e6e7d8-135f-42cd-957f-4d5b65af6051: Found 0 pods out of 5
May 5 21:31:55.353: INFO: Pod name wrapped-volume-race-35e6e7d8-135f-42cd-957f-4d5b65af6051: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-35e6e7d8-135f-42cd-957f-4d5b65af6051 in namespace emptydir-wrapper-6547, will wait for the garbage collector to delete the pods
May 5 21:32:11.543: INFO: Deleting ReplicationController wrapped-volume-race-35e6e7d8-135f-42cd-957f-4d5b65af6051 took: 27.427317ms
May 5 21:32:11.943: INFO: Terminating ReplicationController wrapped-volume-race-35e6e7d8-135f-42cd-957f-4d5b65af6051 pods took: 400.288521ms
STEP: Creating RC which spawns configmap-volume pods
May 5 21:32:19.662: INFO: Pod name wrapped-volume-race-bee2cb6a-0a8c-4ce0-bf1a-dc4f3ba6ee8e: Found 0 pods out of 5
May 5 21:32:24.668: INFO: Pod name wrapped-volume-race-bee2cb6a-0a8c-4ce0-bf1a-dc4f3ba6ee8e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bee2cb6a-0a8c-4ce0-bf1a-dc4f3ba6ee8e in namespace emptydir-wrapper-6547, will wait for the garbage collector to delete the pods
May 5 21:32:38.809: INFO: Deleting ReplicationController wrapped-volume-race-bee2cb6a-0a8c-4ce0-bf1a-dc4f3ba6ee8e took: 29.022915ms
May 5 21:32:39.209: INFO: Terminating ReplicationController wrapped-volume-race-bee2cb6a-0a8c-4ce0-bf1a-dc4f3ba6ee8e pods took: 400.238454ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:32:51.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6547" for this suite.
• [SLOW TEST:92.295 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":111,"skipped":1700,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:32:51.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-f7ef7fe3-0bda-4558-b500-7bf8749f0b68
STEP: Creating secret with name s-test-opt-upd-7915f12d-5b08-4bee-bf3a-6ba243e62db5
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f7ef7fe3-0bda-4558-b500-7bf8749f0b68
STEP: Updating secret s-test-opt-upd-7915f12d-5b08-4bee-bf3a-6ba243e62db5
STEP: Creating secret with name s-test-opt-create-20a3e4d1-1848-4376-a404-82eeb2dc90f2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:34:12.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7580" for this suite.
• [SLOW TEST:80.714 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1738,"failed":0}
SSS
------------------------------
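The "optional updates" behaviour above hinges on marking the secret source optional, so the pod starts even while the secret is missing and the kubelet projects it once it appears. A sketch; the names are placeholders, not the generated ones from this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: maybe-secret
      mountPath: /etc/maybe-secret
  volumes:
  - name: maybe-secret
    projected:
      sources:
      - secret:
          name: s-test-opt-create    # may not exist yet; the pod still starts
          optional: true
EOF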
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:34:12.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 5 21:34:12.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 5 21:34:14.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2343 create -f -'
May 5 21:34:17.200: INFO: stderr: ""
May 5 21:34:17.200: INFO: stdout: "e2e-test-crd-publish-openapi-5049-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 5 21:34:17.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2343 delete e2e-test-crd-publish-openapi-5049-crds test-cr'
May 5 21:34:17.373: INFO: stderr: ""
May 5 21:34:17.373: INFO: stdout: "e2e-test-crd-publish-openapi-5049-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May 5 21:34:17.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2343 apply -f -'
May 5 21:34:17.871: INFO: stderr: ""
May 5 21:34:17.871: INFO: stdout: "e2e-test-crd-publish-openapi-5049-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May 5 21:34:17.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2343 delete e2e-test-crd-publish-openapi-5049-crds test-cr'
May 5 21:34:18.066: INFO: stderr: ""
May 5 21:34:18.066: INFO: stdout: "e2e-test-crd-publish-openapi-5049-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May 5 21:34:18.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5049-crds'
May 5 21:34:18.348: INFO: stderr: ""
May 5 21:34:18.348: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5049-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:34:21.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2343" for this suite.
• [SLOW TEST:9.107 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":113,"skipped":1741,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
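The property under test, unknown fields surviving inside an embedded object, is declared per field with x-kubernetes-preserve-unknown-fields in the published schema. A trimmed, illustrative CRD sketch (the group and names are made up, echoing the Waldo wording from the explain output):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true   # unknown properties are kept
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true
EOF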
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:34:21.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 5 21:34:22.181: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 5 21:34:24.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311262, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311262, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311262, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311262, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 5 21:34:27.256: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:34:27.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7256" for this suite.
STEP: Destroying namespace "webhook-7256-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.156 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should unconditionally reject operations on fail closed webhook [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":114,"skipped":1775,"failed":0}
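"Fail closed" is the combination the test registers above: failurePolicy: Fail plus a clientConfig the API server cannot reach, so every matching request is rejected rather than waved through. An illustrative sketch; the service is deliberately nonexistent and all names are made up:

cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-demo
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail              # reject when the webhook cannot be called
  clientConfig:
    service:
      name: no-such-service        # unreachable on purpose
      namespace: default
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
EOF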
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1775,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:34:31.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 21:34:32.275: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 21:34:34.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311272, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311272, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311272, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311272, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:34:36.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311272, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311272, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311272, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311272, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:34:39.311: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:34:39.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4599" for this suite. STEP: Destroying namespace "webhook-4599-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.603 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":116,"skipped":1842,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:34:40.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 5 21:34:40.571: INFO: Waiting up to 5m0s for pod "downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a" in namespace "downward-api-2734" to be "success or failure" May 5 21:34:40.771: INFO: Pod "downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 200.235589ms May 5 21:34:42.775: INFO: Pod "downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203923309s May 5 21:34:44.801: INFO: Pod "downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a": Phase="Succeeded", Reason="", readiness=false. 
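Listing and collection-deleting webhook configurations, which the test drives through the Go client, maps onto plain kubectl. The label selector here is illustrative; the e2e fixtures are normally matched by a test-specific label:

kubectl get validatingwebhookconfigurations
kubectl delete validatingwebhookconfigurations -l e2e-webhook-fixture=demo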
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:34:40.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
May 5 21:34:40.571: INFO: Waiting up to 5m0s for pod "downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a" in namespace "downward-api-2734" to be "success or failure"
May 5 21:34:40.771: INFO: Pod "downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 200.235589ms
May 5 21:34:42.775: INFO: Pod "downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203923309s
May 5 21:34:44.801: INFO: Pod "downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.230109709s
STEP: Saw pod success
May 5 21:34:44.801: INFO: Pod "downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a" satisfied condition "success or failure"
May 5 21:34:44.803: INFO: Trying to get logs from node jerma-worker2 pod downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a container dapi-container:
STEP: delete the pod
May 5 21:34:44.824: INFO: Waiting for pod downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a to disappear
May 5 21:34:44.853: INFO: Pod downward-api-1fe50d92-d44a-4ee7-8d7d-d568ab3e8f6a no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:34:44.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2734" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1865,"failed":0}
SSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:34:44.862: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
May 5 21:34:44.958: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:34:52.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9309" for this suite.
• [SLOW TEST:7.242 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":118,"skipped":1871,"failed":0}
SSSSS
------------------------------
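Init containers on a restartPolicy: Never pod run sequentially to completion before the app container starts, which is the invocation order this test asserts. A minimal sketch:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["true"]    # must exit 0 before init-2 starts
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo app running"]
EOF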
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:34:52.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-315519e9-1d71-4f55-9ac4-1e70b821c6d2
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-315519e9-1d71-4f55-9ac4-1e70b821c6d2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:34:58.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1990" for this suite.
• [SLOW TEST:6.185 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1876,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:34:58.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 5 21:34:58.398: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 5 21:34:58.405: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:34:58.410: INFO: Number of nodes with available pods: 0
May 5 21:34:58.410: INFO: Node jerma-worker is running more than one daemon pod
May 5 21:34:59.416: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:34:59.419: INFO: Number of nodes with available pods: 0
May 5 21:34:59.419: INFO: Node jerma-worker is running more than one daemon pod
May 5 21:35:00.935: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:00.939: INFO: Number of nodes with available pods: 0
May 5 21:35:00.939: INFO: Node jerma-worker is running more than one daemon pod
May 5 21:35:01.449: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:01.453: INFO: Number of nodes with available pods: 0
May 5 21:35:01.453: INFO: Node jerma-worker is running more than one daemon pod
May 5 21:35:02.415: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:02.418: INFO: Number of nodes with available pods: 0
May 5 21:35:02.418: INFO: Node jerma-worker is running more than one daemon pod
May 5 21:35:03.415: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:03.418: INFO: Number of nodes with available pods: 0
May 5 21:35:03.418: INFO: Node jerma-worker is running more than one daemon pod
May 5 21:35:04.415: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:04.418: INFO: Number of nodes with available pods: 2
May 5 21:35:04.418: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 5 21:35:04.459: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:04.459: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:04.477: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:05.482: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:05.482: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:05.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:06.480: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:06.480: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:06.483: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:07.481: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:07.481: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:07.485: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:08.480: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:08.480: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:08.480: INFO: Pod daemon-set-m9h5s is not available
May 5 21:35:08.495: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:09.482: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:09.482: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:09.482: INFO: Pod daemon-set-m9h5s is not available
May 5 21:35:09.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:10.481: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:10.481: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:10.481: INFO: Pod daemon-set-m9h5s is not available
May 5 21:35:10.485: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:11.481: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:11.481: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:11.481: INFO: Pod daemon-set-m9h5s is not available
May 5 21:35:11.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:12.482: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:12.482: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:12.482: INFO: Pod daemon-set-m9h5s is not available
May 5 21:35:12.487: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:13.481: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:13.481: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:13.481: INFO: Pod daemon-set-m9h5s is not available
May 5 21:35:13.484: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:15.940: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:15.940: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:15.940: INFO: Pod daemon-set-m9h5s is not available
May 5 21:35:15.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:16.482: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:16.482: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:16.482: INFO: Pod daemon-set-m9h5s is not available
May 5 21:35:16.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:17.481: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:17.481: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:17.481: INFO: Pod daemon-set-m9h5s is not available
May 5 21:35:17.485: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:18.482: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:18.482: INFO: Wrong image for pod: daemon-set-m9h5s. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:18.482: INFO: Pod daemon-set-m9h5s is not available
May 5 21:35:18.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:19.482: INFO: Pod daemon-set-28vjk is not available
May 5 21:35:19.482: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:19.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:20.481: INFO: Pod daemon-set-28vjk is not available
May 5 21:35:20.481: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:20.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:21.496: INFO: Pod daemon-set-28vjk is not available
May 5 21:35:21.496: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:21.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:22.481: INFO: Pod daemon-set-28vjk is not available
May 5 21:35:22.481: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:22.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:23.481: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:23.488: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:24.481: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:24.481: INFO: Pod daemon-set-7w5fs is not available
May 5 21:35:24.485: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:25.482: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:25.482: INFO: Pod daemon-set-7w5fs is not available
May 5 21:35:25.486: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:26.508: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:26.508: INFO: Pod daemon-set-7w5fs is not available
May 5 21:35:26.512: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:27.481: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:27.481: INFO: Pod daemon-set-7w5fs is not available
May 5 21:35:27.485: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:28.509: INFO: Wrong image for pod: daemon-set-7w5fs. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 5 21:35:28.509: INFO: Pod daemon-set-7w5fs is not available
May 5 21:35:28.544: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:29.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:30.481: INFO: Pod daemon-set-t7v7c is not available
May 5 21:35:30.485: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 5 21:35:30.489: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:30.491: INFO: Number of nodes with available pods: 1
May 5 21:35:30.491: INFO: Node jerma-worker2 is running more than one daemon pod
May 5 21:35:31.497: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:31.500: INFO: Number of nodes with available pods: 1
May 5 21:35:31.500: INFO: Node jerma-worker2 is running more than one daemon pod
May 5 21:35:32.496: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 5 21:35:32.500: INFO: Number of nodes with available pods: 2
May 5 21:35:32.500: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1092, will wait for the garbage collector to delete the pods
May 5 21:35:32.590: INFO: Deleting DaemonSet.extensions daemon-set took: 21.27809ms
May 5 21:35:32.890: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.282926ms
May 5 21:35:36.294: INFO: Number of nodes with available pods: 0
May 5 21:35:36.294: INFO: Number of running nodes: 0, number of available pods: 0
May 5 21:35:36.297: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1092/daemonsets","resourceVersion":"13681379"},"items":null}
May 5 21:35:36.300: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1092/pods","resourceVersion":"13681379"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:35:36.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1092" for this suite.
• [SLOW TEST:38.028 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":120,"skipped":1882,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
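The long Wrong image / not available sequence above is the RollingUpdate strategy replacing DaemonSet pods one node at a time after the pod template changed. Driven by hand it would look roughly like this; the daemonset and container names are placeholders:

# Update the template image to trigger the same kind of rollout.
kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
# Watch old pods drain and replacements become available, as the log shows.
kubectl rollout status daemonset/daemon-set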
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1092/pods","resourceVersion":"13681379"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:35:36.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1092" for this suite. • [SLOW TEST:38.028 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":120,"skipped":1882,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:35:36.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 5 21:35:36.420: INFO: Waiting up to 5m0s for pod "pod-9b2b395b-cb39-499c-ad4c-364ee36f6df0" in namespace "emptydir-1004" to be "success or failure" May 5 21:35:36.447: INFO: Pod "pod-9b2b395b-cb39-499c-ad4c-364ee36f6df0": Phase="Pending", Reason="", readiness=false. Elapsed: 27.319127ms May 5 21:35:38.451: INFO: Pod "pod-9b2b395b-cb39-499c-ad4c-364ee36f6df0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031092102s May 5 21:35:40.456: INFO: Pod "pod-9b2b395b-cb39-499c-ad4c-364ee36f6df0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035653428s STEP: Saw pod success May 5 21:35:40.456: INFO: Pod "pod-9b2b395b-cb39-499c-ad4c-364ee36f6df0" satisfied condition "success or failure" May 5 21:35:40.459: INFO: Trying to get logs from node jerma-worker2 pod pod-9b2b395b-cb39-499c-ad4c-364ee36f6df0 container test-container: STEP: delete the pod May 5 21:35:40.484: INFO: Waiting for pod pod-9b2b395b-cb39-499c-ad4c-364ee36f6df0 to disappear May 5 21:35:40.489: INFO: Pod pod-9b2b395b-cb39-499c-ad4c-364ee36f6df0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:35:40.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1004" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:35:40.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 5 21:35:40.579: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-494" to be "success or failure" May 5 21:35:40.585: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.059797ms May 5 21:35:42.588: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008435634s May 5 21:35:44.592: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012927878s May 5 21:35:46.597: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017552623s STEP: Saw pod success May 5 21:35:46.597: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 5 21:35:46.601: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 5 21:35:46.640: INFO: Waiting for pod pod-host-path-test to disappear May 5 21:35:46.658: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:35:46.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-494" for this suite. 
• [SLOW TEST:6.168 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":1980,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:35:46.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:36:46.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3060" for this suite. • [SLOW TEST:60.096 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2027,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:36:46.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 5 21:36:46.922: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:46.938: INFO: Number of nodes with available pods: 0 May 5 21:36:46.938: INFO: Node jerma-worker is running more than one daemon pod May 5 21:36:47.944: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:47.947: INFO: Number of nodes with available pods: 0 May 5 21:36:47.947: INFO: Node jerma-worker is running more than one daemon pod May 5 21:36:48.944: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:48.948: INFO: Number of nodes with available pods: 0 May 5 21:36:48.948: INFO: Node jerma-worker is running more than one daemon pod May 5 21:36:49.971: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:49.975: INFO: Number of nodes with available pods: 0 May 5 21:36:49.975: INFO: Node jerma-worker is running more than one daemon pod May 5 21:36:50.944: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:50.947: INFO: Number of nodes with available pods: 0 May 5 21:36:50.947: INFO: Node jerma-worker is running more than one daemon pod May 5 21:36:51.943: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:51.947: INFO: Number of nodes with available pods: 2 May 5 21:36:51.947: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 5 21:36:51.978: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:52.003: INFO: Number of nodes with available pods: 1 May 5 21:36:52.003: INFO: Node jerma-worker is running more than one daemon pod May 5 21:36:53.049: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:53.078: INFO: Number of nodes with available pods: 1 May 5 21:36:53.078: INFO: Node jerma-worker is running more than one daemon pod May 5 21:36:54.073: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:54.083: INFO: Number of nodes with available pods: 1 May 5 21:36:54.083: INFO: Node jerma-worker is running more than one daemon pod May 5 21:36:55.012: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:55.015: INFO: Number of nodes with available pods: 1 May 5 21:36:55.015: INFO: Node jerma-worker is running more than one daemon pod May 5 21:36:56.008: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:36:56.011: INFO: Number of nodes with available pods: 2 May 5 21:36:56.011: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1674, will wait for the garbage collector to delete the pods May 5 21:36:56.075: INFO: Deleting DaemonSet.extensions daemon-set took: 7.804354ms May 5 21:36:56.376: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.345036ms May 5 21:37:09.580: INFO: Number of nodes with available pods: 0 May 5 21:37:09.580: INFO: Number of running nodes: 0, number of available pods: 0 May 5 21:37:09.582: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1674/daemonsets","resourceVersion":"13681823"},"items":null} May 5 21:37:09.584: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1674/pods","resourceVersion":"13681823"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:37:09.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1674" for this suite. 
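The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines above are expected: the test's DaemonSet carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint reported in the log, so the framework skips the control-plane node when counting available pods. A sketch of the toleration that would change that behavior; the selector and labels are illustrative, only the DaemonSet name and taint key come from this run.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set                          # name used by the test
spec:
  selector:
    matchLabels:
      app: daemon-set                       # illustrative labels
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master # taint reported in the log above
        operator: Exists
        effect: NoSchedule
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # image seen elsewhere in this run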
• [SLOW TEST:22.833 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":124,"skipped":2071,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:37:09.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4580 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4580 STEP: creating replication controller externalsvc in namespace services-4580 I0505 21:37:09.858057 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4580, replica count: 2 I0505 21:37:12.908466 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 21:37:15.908702 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 5 21:37:15.980: INFO: Creating new exec pod May 5 21:37:19.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4580 execpodvlcx5 -- /bin/sh -x -c nslookup nodeport-service' May 5 21:37:20.250: INFO: stderr: "I0505 21:37:20.125067 2303 log.go:172] (0xc0008469a0) (0xc000840000) Create stream\nI0505 21:37:20.125316 2303 log.go:172] (0xc0008469a0) (0xc000840000) Stream added, broadcasting: 1\nI0505 21:37:20.128152 2303 log.go:172] (0xc0008469a0) Reply frame received for 1\nI0505 21:37:20.128232 2303 log.go:172] (0xc0008469a0) (0xc000636000) Create stream\nI0505 21:37:20.128259 2303 log.go:172] (0xc0008469a0) (0xc000636000) Stream added, broadcasting: 3\nI0505 21:37:20.129508 2303 log.go:172] (0xc0008469a0) Reply frame received for 3\nI0505 21:37:20.129549 2303 log.go:172] (0xc0008469a0) (0xc000636140) Create stream\nI0505 21:37:20.129562 2303 log.go:172] (0xc0008469a0) (0xc000636140) Stream added, broadcasting: 5\nI0505 21:37:20.130513 2303 log.go:172] (0xc0008469a0) Reply frame received for 5\nI0505 21:37:20.231994 2303 log.go:172] (0xc0008469a0) Data frame received for 5\nI0505 21:37:20.232027 2303 log.go:172] (0xc000636140) (5) Data frame handling\nI0505 21:37:20.232047 2303 log.go:172] 
(0xc000636140) (5) Data frame sent\n+ nslookup nodeport-service\nI0505 21:37:20.240843 2303 log.go:172] (0xc0008469a0) Data frame received for 3\nI0505 21:37:20.240876 2303 log.go:172] (0xc000636000) (3) Data frame handling\nI0505 21:37:20.240906 2303 log.go:172] (0xc000636000) (3) Data frame sent\nI0505 21:37:20.242300 2303 log.go:172] (0xc0008469a0) Data frame received for 3\nI0505 21:37:20.242344 2303 log.go:172] (0xc000636000) (3) Data frame handling\nI0505 21:37:20.242384 2303 log.go:172] (0xc000636000) (3) Data frame sent\nI0505 21:37:20.243087 2303 log.go:172] (0xc0008469a0) Data frame received for 3\nI0505 21:37:20.243106 2303 log.go:172] (0xc000636000) (3) Data frame handling\nI0505 21:37:20.243143 2303 log.go:172] (0xc0008469a0) Data frame received for 5\nI0505 21:37:20.243161 2303 log.go:172] (0xc000636140) (5) Data frame handling\nI0505 21:37:20.244876 2303 log.go:172] (0xc0008469a0) Data frame received for 1\nI0505 21:37:20.244913 2303 log.go:172] (0xc000840000) (1) Data frame handling\nI0505 21:37:20.244941 2303 log.go:172] (0xc000840000) (1) Data frame sent\nI0505 21:37:20.245096 2303 log.go:172] (0xc0008469a0) (0xc000840000) Stream removed, broadcasting: 1\nI0505 21:37:20.245377 2303 log.go:172] (0xc0008469a0) Go away received\nI0505 21:37:20.245599 2303 log.go:172] (0xc0008469a0) (0xc000840000) Stream removed, broadcasting: 1\nI0505 21:37:20.245613 2303 log.go:172] (0xc0008469a0) (0xc000636000) Stream removed, broadcasting: 3\nI0505 21:37:20.245624 2303 log.go:172] (0xc0008469a0) (0xc000636140) Stream removed, broadcasting: 5\n" May 5 21:37:20.250: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4580.svc.cluster.local\tcanonical name = externalsvc.services-4580.svc.cluster.local.\nName:\texternalsvc.services-4580.svc.cluster.local\nAddress: 10.109.243.228\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4580, will wait for the garbage collector to delete the pods May 5 21:37:20.309: INFO: Deleting ReplicationController externalsvc took: 5.839287ms May 5 21:37:20.410: INFO: Terminating ReplicationController externalsvc pods took: 100.222162ms May 5 21:37:29.637: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:37:29.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4580" for this suite. 
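The type change exercised above amounts to replacing the NodePort spec with an ExternalName one, after which the service resolves as a CNAME to its target; the nslookup stdout above shows exactly that canonical-name chain. A sketch of the resulting object, reconstructed from the names in this run's log (treat the spec as illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-4580
spec:
  type: ExternalName
  externalName: externalsvc.services-4580.svc.cluster.local   # FQDN seen in the nslookup stdout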
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:20.116 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":125,"skipped":2082,"failed":0} SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:37:29.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-3 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3 STEP: Deleting pre-stop pod May 5 21:37:42.818: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:37:42.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3" for this suite. 
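The 'Received: {"prestop": 1}' payload above is the server pod counting one hit on its /prestop endpoint, triggered when the tester pod's preStop hook ran during deletion. A minimal sketch of such a hook follows; the exec form, server address, and image are assumptions for illustration, not details from this run.

apiVersion: v1
kind: Pod
metadata:
  name: tester                     # pod name from this run; spec is a sketch
spec:
  containers:
  - name: tester
    image: busybox                 # stand-in image
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          command: ["wget", "-O-", "http://SERVER_POD_IP:8080/prestop"]   # hypothetical server address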
• [SLOW TEST:13.182 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":126,"skipped":2084,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:37:42.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 5 21:37:42.951: INFO: Waiting up to 5m0s for pod "var-expansion-13a68383-395a-42b3-a3a7-5abc5df45dde" in namespace "var-expansion-6866" to be "success or failure" May 5 21:37:42.954: INFO: Pod "var-expansion-13a68383-395a-42b3-a3a7-5abc5df45dde": Phase="Pending", Reason="", readiness=false. Elapsed: 3.384508ms May 5 21:37:44.983: INFO: Pod "var-expansion-13a68383-395a-42b3-a3a7-5abc5df45dde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032108438s May 5 21:37:46.987: INFO: Pod "var-expansion-13a68383-395a-42b3-a3a7-5abc5df45dde": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035902s STEP: Saw pod success May 5 21:37:46.987: INFO: Pod "var-expansion-13a68383-395a-42b3-a3a7-5abc5df45dde" satisfied condition "success or failure" May 5 21:37:46.989: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-13a68383-395a-42b3-a3a7-5abc5df45dde container dapi-container: STEP: delete the pod May 5 21:37:47.038: INFO: Waiting for pod var-expansion-13a68383-395a-42b3-a3a7-5abc5df45dde to disappear May 5 21:37:47.097: INFO: Pod var-expansion-13a68383-395a-42b3-a3a7-5abc5df45dde no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:37:47.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6866" for this suite. 
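Substitution in a container's args, as verified above, uses the $(VAR) syntax that the kubelet expands from the container's declared env before the container starts (so the shell never sees the parentheses form). A self-contained sketch; the pod name, variable, value, and image are illustrative, only the container name comes from this run.

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container           # container name as logged above
    image: busybox                 # stand-in image
    env:
    - name: MESSAGE
      value: "test message"        # illustrative value
    command: ["sh", "-c"]
    args: ["echo $(MESSAGE)"]      # expanded by the kubelet from env, not by the shell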
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2085,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:37:47.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 5 21:37:47.190: INFO: Waiting up to 5m0s for pod "pod-305ee21f-9aca-4bba-8160-1b03fb5bdf37" in namespace "emptydir-4334" to be "success or failure" May 5 21:37:47.193: INFO: Pod "pod-305ee21f-9aca-4bba-8160-1b03fb5bdf37": Phase="Pending", Reason="", readiness=false. Elapsed: 3.367832ms May 5 21:37:49.197: INFO: Pod "pod-305ee21f-9aca-4bba-8160-1b03fb5bdf37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006817653s May 5 21:37:51.201: INFO: Pod "pod-305ee21f-9aca-4bba-8160-1b03fb5bdf37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01069681s STEP: Saw pod success May 5 21:37:51.201: INFO: Pod "pod-305ee21f-9aca-4bba-8160-1b03fb5bdf37" satisfied condition "success or failure" May 5 21:37:51.203: INFO: Trying to get logs from node jerma-worker pod pod-305ee21f-9aca-4bba-8160-1b03fb5bdf37 container test-container: STEP: delete the pod May 5 21:37:51.255: INFO: Waiting for pod pod-305ee21f-9aca-4bba-8160-1b03fb5bdf37 to disappear May 5 21:37:51.265: INFO: Pod pod-305ee21f-9aca-4bba-8160-1b03fb5bdf37 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:37:51.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4334" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2093,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:37:51.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 5 21:37:55.411: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:37:55.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7483" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2094,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:37:55.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:37:55.561: INFO: Create a RollingUpdate DaemonSet May 5 21:37:55.564: INFO: Check that daemon pods launch on every node of the cluster May 5 21:37:55.571: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:37:55.589: INFO: Number of nodes with available pods: 0 May 5 21:37:55.589: INFO: Node jerma-worker is running more than one daemon pod May 5 21:37:56.667: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:37:56.671: INFO: Number of nodes with available pods: 0 May 5 21:37:56.671: INFO: Node jerma-worker is running more than one daemon pod May 5 21:37:57.627: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:37:57.629: INFO: Number of nodes with available pods: 0 May 5 21:37:57.629: INFO: Node jerma-worker is running more than one daemon pod May 5 21:37:58.607: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:37:58.610: INFO: Number of nodes with available pods: 0 May 5 21:37:58.610: INFO: Node jerma-worker is running more than one daemon pod May 5 21:37:59.597: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:37:59.602: INFO: Number of nodes with available pods: 1 May 5 21:37:59.602: INFO: Node jerma-worker2 is running more than one daemon pod May 5 21:38:00.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:38:00.628: INFO: Number of nodes with available pods: 2 May 5 21:38:00.628: INFO: Number of running nodes: 2, number of available pods: 2 May 5 21:38:00.628: INFO: Update the DaemonSet to trigger a rollout May 5 21:38:00.633: INFO: Updating DaemonSet daemon-set May 5 21:38:09.657: INFO: Roll back the DaemonSet before rollout is complete May 5 21:38:09.662: INFO: Updating DaemonSet daemon-set May 5 21:38:09.662: INFO: Make sure DaemonSet rollback is complete May 5 21:38:09.673: INFO: Wrong image for pod: daemon-set-dfspm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 5 21:38:09.673: INFO: Pod daemon-set-dfspm is not available May 5 21:38:09.703: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:38:10.707: INFO: Wrong image for pod: daemon-set-dfspm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 5 21:38:10.707: INFO: Pod daemon-set-dfspm is not available May 5 21:38:10.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:38:11.756: INFO: Wrong image for pod: daemon-set-dfspm. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 5 21:38:11.756: INFO: Pod daemon-set-dfspm is not available May 5 21:38:11.759: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 5 21:38:12.708: INFO: Pod daemon-set-t4sbl is not available May 5 21:38:12.712: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2785, will wait for the garbage collector to delete the pods May 5 21:38:12.777: INFO: Deleting DaemonSet.extensions daemon-set took: 6.37829ms May 5 21:38:13.078: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.251776ms May 5 21:38:15.581: INFO: Number of nodes with available pods: 0 May 5 21:38:15.581: INFO: Number of running nodes: 0, number of available pods: 0 May 5 21:38:15.606: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2785/daemonsets","resourceVersion":"13682315"},"items":null} May 5 21:38:15.609: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2785/pods","resourceVersion":"13682315"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:38:15.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2785" for this suite. • [SLOW TEST:20.184 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":130,"skipped":2112,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:38:15.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-1553d186-fb0b-49b6-a56a-6198644d9fc8 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:38:15.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4947" for this suite. 
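The ConfigMap test above passes because the API server rejects the object at create time: ConfigMap data keys must be non-empty (and limited to the usual key characters), so a manifest like the following sketch fails validation with an Invalid error. The name is illustrative; this run used a generated one.

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey    # illustrative name
data:
  "": "value"                      # empty key: rejected by API server validation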
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":131,"skipped":2118,"failed":0} SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:38:15.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 5 21:38:16.292: INFO: created pod pod-service-account-defaultsa May 5 21:38:16.293: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 5 21:38:16.337: INFO: created pod pod-service-account-mountsa May 5 21:38:16.337: INFO: pod pod-service-account-mountsa service account token volume mount: true May 5 21:38:16.364: INFO: created pod pod-service-account-nomountsa May 5 21:38:16.364: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 5 21:38:16.393: INFO: created pod pod-service-account-defaultsa-mountspec May 5 21:38:16.393: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 5 21:38:16.476: INFO: created pod pod-service-account-mountsa-mountspec May 5 21:38:16.476: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 5 21:38:16.520: INFO: created pod pod-service-account-nomountsa-mountspec May 5 21:38:16.520: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 5 21:38:16.607: INFO: created pod pod-service-account-defaultsa-nomountspec May 5 21:38:16.607: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 5 21:38:16.623: INFO: created pod pod-service-account-mountsa-nomountspec May 5 21:38:16.623: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 5 21:38:16.678: INFO: created pod pod-service-account-nomountsa-nomountspec May 5 21:38:16.678: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:38:16.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4106" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":132,"skipped":2126,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:38:16.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 5 21:38:16.943: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 5 21:38:29.047: INFO: >>> kubeConfig: /root/.kube/config May 5 21:38:31.714: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:38:42.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6679" for this suite. • [SLOW TEST:25.344 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":133,"skipped":2130,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:38:42.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 21:38:42.260: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8146' May 5 21:38:42.373: INFO: stderr: "" May 5 21:38:42.373: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 5 21:38:42.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8146' May 5 21:38:49.232: INFO: stderr: "" May 5 21:38:49.232: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:38:49.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8146" for this suite. • [SLOW TEST:7.024 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":134,"skipped":2147,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:38:49.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-8f2e2362-4fe4-4e99-9303-2cfec87fa071 STEP: Creating a pod to test consume secrets May 5 21:38:49.319: INFO: Waiting up to 5m0s for pod "pod-secrets-9723fa07-8f20-4986-af28-e1f95dc52f0e" in namespace "secrets-1079" to be "success or failure" May 5 21:38:49.338: INFO: Pod "pod-secrets-9723fa07-8f20-4986-af28-e1f95dc52f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.772259ms May 5 21:38:51.343: INFO: Pod "pod-secrets-9723fa07-8f20-4986-af28-e1f95dc52f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023602355s May 5 21:38:53.347: INFO: Pod "pod-secrets-9723fa07-8f20-4986-af28-e1f95dc52f0e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027718219s STEP: Saw pod success May 5 21:38:53.347: INFO: Pod "pod-secrets-9723fa07-8f20-4986-af28-e1f95dc52f0e" satisfied condition "success or failure" May 5 21:38:53.350: INFO: Trying to get logs from node jerma-worker pod pod-secrets-9723fa07-8f20-4986-af28-e1f95dc52f0e container secret-env-test: STEP: delete the pod May 5 21:38:53.495: INFO: Waiting for pod pod-secrets-9723fa07-8f20-4986-af28-e1f95dc52f0e to disappear May 5 21:38:53.543: INFO: Pod pod-secrets-9723fa07-8f20-4986-af28-e1f95dc52f0e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:38:53.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1079" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:38:53.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:38:53.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3751" for this suite. 
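Returning to the secrets-as-environment test logged just before the Table-transformation run above: the pod there consumed a Secret key through env.valueFrom and echoed it for verification. A minimal sketch; the pod, secret, and key names and the image are illustrative, not this run's generated names (only the container name matches the log).

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test          # container name as logged above
    image: busybox                 # stand-in image
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: my-secret          # illustrative secret name
          key: data-1              # illustrative key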
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":136,"skipped":2200,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:38:53.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 5 21:38:53.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 5 21:38:53.879: INFO: stderr: "" May 5 21:38:53.879: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:38:53.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3168" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":137,"skipped":2205,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:38:53.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3392 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3392 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3392 May 5 21:38:54.043: INFO: Found 0 stateful pods, waiting for 1 May 5 21:39:04.047: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 5 21:39:04.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3392 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 21:39:04.353: INFO: stderr: "I0505 21:39:04.186462 2381 log.go:172] (0xc0003be000) (0xc00095e0a0) Create stream\nI0505 21:39:04.186512 2381 log.go:172] (0xc0003be000) (0xc00095e0a0) Stream added, broadcasting: 1\nI0505 21:39:04.188794 2381 log.go:172] (0xc0003be000) Reply frame received for 1\nI0505 21:39:04.188822 2381 log.go:172] (0xc0003be000) (0xc00078d4a0) Create stream\nI0505 21:39:04.188832 2381 log.go:172] (0xc0003be000) (0xc00078d4a0) Stream added, broadcasting: 3\nI0505 21:39:04.190102 2381 log.go:172] (0xc0003be000) Reply frame received for 3\nI0505 21:39:04.190148 2381 log.go:172] (0xc0003be000) (0xc00095e140) Create stream\nI0505 21:39:04.190161 2381 log.go:172] (0xc0003be000) (0xc00095e140) Stream added, broadcasting: 5\nI0505 21:39:04.191346 2381 log.go:172] (0xc0003be000) Reply frame received for 5\nI0505 21:39:04.300731 2381 log.go:172] (0xc0003be000) Data frame received for 5\nI0505 21:39:04.300756 2381 log.go:172] (0xc00095e140) (5) Data frame handling\nI0505 21:39:04.300772 2381 log.go:172] (0xc00095e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 21:39:04.344968 2381 log.go:172] (0xc0003be000) Data frame received for 3\nI0505 21:39:04.345013 2381 log.go:172] (0xc00078d4a0) (3) Data frame handling\nI0505 21:39:04.345051 2381 log.go:172] (0xc00078d4a0) (3) Data frame sent\nI0505 21:39:04.345309 2381 log.go:172] (0xc0003be000) Data frame received for 5\nI0505 
21:39:04.348766 2381 log.go:172] (0xc0003be000) (0xc00095e140) Stream removed, broadcasting: 5\n"
May 5 21:39:04.353: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 5 21:39:04.354: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 5 21:39:04.357: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 5 21:39:14.362: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 5 21:39:14.362: INFO: Waiting for statefulset status.replicas updated to 0
May 5 21:39:14.398: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999472s
May 5 21:39:15.401: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.973766466s
May 5 21:39:16.406: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.970400192s
May 5 21:39:17.427: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.965359309s
May 5 21:39:18.431: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.944758434s
May 5 21:39:19.435: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.940777593s
May 5 21:39:20.439: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.936584493s
May 5 21:39:21.463: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.932240239s
May 5 21:39:22.467: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.908647846s
May 5 21:39:23.472: INFO: Verifying statefulset ss doesn't scale past 1 for another 904.090034ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3392
May 5 21:39:24.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3392 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 5 21:39:24.718: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
May 5 21:39:24.718: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 5 21:39:24.718: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 5 21:39:24.724: INFO: Found 1 stateful pods, waiting for 3
May 5 21:39:34.729: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 5 21:39:34.729: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 5 21:39:34.729: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May 5 21:39:34.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3392 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 5 21:39:34.961: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
May 5 21:39:34.962: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 5 21:39:34.962: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 5 21:39:34.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3392 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 5 21:39:35.214: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
May 5 21:39:35.214: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 5 21:39:35.214: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 5 21:39:35.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3392 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 5 21:39:35.456: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
May 5 21:39:35.456: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 5 21:39:35.456: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 5 21:39:35.456: INFO: Waiting for statefulset status.replicas updated to 0
May 5 21:39:35.460: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May 5 21:39:45.472: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 5 21:39:45.472: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 5 21:39:45.472: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 5 21:39:45.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999739s
May 5 21:39:46.496: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.99236752s
May 5 21:39:47.500: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987459186s
May 5 21:39:48.506: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.982662143s
May 5 21:39:49.511: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.977749005s
May 5 21:39:50.530: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972633326s
May 5 21:39:51.534: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.953272089s
May 5 21:39:52.540: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.948811097s
May 5 21:39:53.545: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.943667847s
May 5 21:39:54.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 938.792103ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3392
May 5 21:39:55.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3392 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 5 21:39:55.762: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
May 5 21:39:55.762: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 5 21:39:55.762: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 5 21:39:55.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3392 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 5 21:39:56.001: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
May 5 21:39:56.001: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 5 21:39:56.001: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 5 21:39:56.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3392 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 5 21:39:56.212: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
May 5 21:39:56.212: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 5 21:39:56.212: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 5 21:39:56.212: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 5 21:40:26.228: INFO: Deleting all statefulset in ns statefulset-3392
May 5 21:40:26.232: INFO: Scaling statefulset ss to 0
May 5 21:40:26.241: INFO: Waiting for statefulset status.replicas updated to 0
May 5 21:40:26.244: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:40:26.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3392" for this suite.
• [SLOW TEST:92.376 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":138,"skipped":2211,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
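The halting behavior the suite just verified is driven by pod readiness: with the default podManagementPolicy of OrderedReady, a StatefulSet will not create or remove pods past an un-Ready ordinal. A minimal manual reproduction of the same manipulation, assuming a StatefulSet named ss whose readiness probe serves index.html from httpd's htdocs directory (the suite's namespace is reused here purely for illustration):

  # Break readiness on the lowest ordinal, then ask for more replicas:
  kubectl --namespace=statefulset-3392 exec ss-0 -- /bin/sh -c 'mv /usr/local/apache2/htdocs/index.html /tmp/'
  kubectl --namespace=statefulset-3392 scale statefulset ss --replicas=3
  # ss-1 and ss-2 are not created until ss-0 reports Ready again:
  kubectl --namespace=statefulset-3392 get pods -w
  # Restore readiness so the scale-up can proceed:
  kubectl --namespace=statefulset-3392 exec ss-0 -- /bin/sh -c 'mv /tmp/index.html /usr/local/apache2/htdocs/'

The same gate applies on the way down, which is why the log above shows the scale-down holding at three replicas until every pod's index.html is restored.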
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:40:26.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4060
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4060
I0505 21:40:26.546955 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4060, replica count: 2
I0505 21:40:29.597572 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0505 21:40:32.597817 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 5 21:40:32.597: INFO: Creating new exec pod
May 5 21:40:37.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4060 execpodz2kxv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May 5 21:40:37.858: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
May 5 21:40:37.858: INFO: stdout: ""
May 5 21:40:37.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4060 execpodz2kxv -- /bin/sh -x -c nc -zv -t -w 2 10.103.118.144 80'
May 5 21:40:38.064: INFO: stderr: "+ nc -zv -t -w 2 10.103.118.144 80\nConnection to 10.103.118.144 80 port [tcp/http] succeeded!\n"
May 5 21:40:38.064: INFO: stdout: ""
May 5 21:40:38.064: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:40:38.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4060" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:11.855 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":139,"skipped":2262,"failed":0}
S
------------------------------
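Re-typing a Service away from ExternalName is an ordinary spec update; once it is ClusterIP it needs a selector and ports to pick up backing pods, after which the allocated cluster IP answers on the service port. A rough kubectl equivalent of the flow above, not the suite's exact fixtures: the external name, the selector label name=externalname-service, and the port mapping are all assumptions here:

  kubectl --namespace=services-4060 create service externalname externalname-service --external-name=clusterip.example.com
  kubectl --namespace=services-4060 patch service externalname-service -p '{"spec":{"type":"ClusterIP","externalName":null,"selector":{"name":"externalname-service"},"ports":[{"port":80,"targetPort":80}]}}'
  # Same reachability probe the suite runs from its exec pod:
  kubectl --namespace=services-4060 exec execpodz2kxv -- nc -zv -t -w 2 externalname-service 80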
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:40:38.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-943
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
May 5 21:40:38.652: INFO: Found 0 stateful pods, waiting for 3
May 5 21:40:48.656: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 5 21:40:48.657: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 5 21:40:48.657: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
May 5 21:40:58.657: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 5 21:40:58.657: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 5 21:40:58.657: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 5 21:40:58.684: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May 5 21:41:08.744: INFO: Updating stateful set ss2
May 5 21:41:08.799: INFO: Waiting for Pod statefulset-943/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 5 21:41:18.807: INFO: Waiting for Pod statefulset-943/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
May 5 21:41:28.975: INFO: Found 2 stateful pods, waiting for 3
May 5 21:41:38.979: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 5 21:41:38.979: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 5 21:41:38.979: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May 5 21:41:39.003: INFO: Updating stateful set ss2
May 5 21:41:39.057: INFO: Waiting for Pod statefulset-943/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 5 21:41:49.066: INFO: Waiting for Pod statefulset-943/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May 5 21:41:59.083: INFO: Updating stateful set ss2
May 5 21:41:59.110: INFO: Waiting for StatefulSet statefulset-943/ss2 to complete update
May 5 21:41:59.110: INFO: Waiting for Pod statefulset-943/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 5 21:42:10.106: INFO: Deleting all statefulset in ns statefulset-943
May 5 21:42:10.111: INFO: Scaling statefulset ss2 to 0
May 5 21:42:40.230: INFO: Waiting for statefulset status.replicas updated to 0
May 5 21:42:40.233: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:42:40.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-943" for this suite.
• [SLOW TEST:122.139 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":140,"skipped":2263,"failed":0}
SSSS
------------------------------
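The canary and phased roll-out above are both driven by spec.updateStrategy.rollingUpdate.partition: only ordinals greater than or equal to the partition move to the new revision, so pinning the partition at 2 updates ss2-2 alone, and lowering it releases the remaining pods. A sketch of the two phases with kubectl, assuming the container in the pod template is named webserver (an assumption, not taken from the log):

  # Canary: only the highest ordinal (ss2-2) picks up the new image.
  kubectl --namespace=statefulset-943 patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
  kubectl --namespace=statefulset-943 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
  # Phased roll-out: dropping the partition to 0 updates the rest, highest ordinal first.
  kubectl --namespace=statefulset-943 patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
  kubectl --namespace=statefulset-943 rollout status statefulset/ss2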
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:42:40.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:42:51.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7868" for this suite.
• [SLOW TEST:11.279 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":141,"skipped":2267,"failed":0}
SSSS
------------------------------
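The lifecycle the quota test walks through, usage rising when the Service is created and released when it is deleted, can be observed by hand; a minimal sketch with illustrative names (the suite's own quota and service names are not in the log):

  kubectl --namespace=resourcequota-7868 create quota test-quota --hard=services=2
  kubectl --namespace=resourcequota-7868 create service clusterip quota-svc --tcp=80:80
  # status.used.services rises to 1 once the quota controller syncs:
  kubectl --namespace=resourcequota-7868 get resourcequota test-quota -o jsonpath='{.status.used.services}'
  kubectl --namespace=resourcequota-7868 delete service quota-svc
  # ...and drops back to 0 shortly after the delete.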
[Conformance]","total":278,"completed":141,"skipped":2267,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:42:51.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-aa1b5afe-5783-44b5-9294-98334b7cd91a STEP: Creating a pod to test consume secrets May 5 21:42:51.609: INFO: Waiting up to 5m0s for pod "pod-secrets-c9ba3c6c-3edb-4db7-854e-e5c9d3c5ee38" in namespace "secrets-8471" to be "success or failure" May 5 21:42:51.613: INFO: Pod "pod-secrets-c9ba3c6c-3edb-4db7-854e-e5c9d3c5ee38": Phase="Pending", Reason="", readiness=false. Elapsed: 3.672745ms May 5 21:42:53.656: INFO: Pod "pod-secrets-c9ba3c6c-3edb-4db7-854e-e5c9d3c5ee38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047091061s May 5 21:42:55.680: INFO: Pod "pod-secrets-c9ba3c6c-3edb-4db7-854e-e5c9d3c5ee38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070721325s STEP: Saw pod success May 5 21:42:55.680: INFO: Pod "pod-secrets-c9ba3c6c-3edb-4db7-854e-e5c9d3c5ee38" satisfied condition "success or failure" May 5 21:42:55.683: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c9ba3c6c-3edb-4db7-854e-e5c9d3c5ee38 container secret-volume-test: STEP: delete the pod May 5 21:42:55.743: INFO: Waiting for pod pod-secrets-c9ba3c6c-3edb-4db7-854e-e5c9d3c5ee38 to disappear May 5 21:42:55.751: INFO: Pod pod-secrets-c9ba3c6c-3edb-4db7-854e-e5c9d3c5ee38 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:42:55.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8471" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2271,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:42:55.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-e89d5142-965a-4ef1-91df-6d3fa5f0ca26 STEP: Creating a pod to test consume configMaps May 5 21:42:55.860: INFO: Waiting up to 5m0s for pod "pod-configmaps-5cf5481f-7880-4eca-8bf3-3abf5db5a6b9" in namespace "configmap-3318" to be "success or failure" May 5 21:42:55.865: INFO: Pod "pod-configmaps-5cf5481f-7880-4eca-8bf3-3abf5db5a6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384363ms May 5 21:42:57.868: INFO: Pod "pod-configmaps-5cf5481f-7880-4eca-8bf3-3abf5db5a6b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007590356s May 5 21:42:59.872: INFO: Pod "pod-configmaps-5cf5481f-7880-4eca-8bf3-3abf5db5a6b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011671297s STEP: Saw pod success May 5 21:42:59.872: INFO: Pod "pod-configmaps-5cf5481f-7880-4eca-8bf3-3abf5db5a6b9" satisfied condition "success or failure" May 5 21:42:59.874: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5cf5481f-7880-4eca-8bf3-3abf5db5a6b9 container configmap-volume-test: STEP: delete the pod May 5 21:42:59.896: INFO: Waiting for pod pod-configmaps-5cf5481f-7880-4eca-8bf3-3abf5db5a6b9 to disappear May 5 21:42:59.901: INFO: Pod pod-configmaps-5cf5481f-7880-4eca-8bf3-3abf5db5a6b9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:42:59.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3318" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2273,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:42:59.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:43:00.010: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:43:01.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9035" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":144,"skipped":2291,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:43:01.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 5 21:43:01.430: INFO: Waiting up to 5m0s for pod "var-expansion-27d71661-2622-45fa-9d2f-9272331d785d" in namespace "var-expansion-6232" to be "success or failure" May 5 21:43:01.585: INFO: Pod "var-expansion-27d71661-2622-45fa-9d2f-9272331d785d": Phase="Pending", Reason="", readiness=false. Elapsed: 154.52364ms May 5 21:43:03.589: INFO: Pod "var-expansion-27d71661-2622-45fa-9d2f-9272331d785d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15847614s May 5 21:43:05.593: INFO: Pod "var-expansion-27d71661-2622-45fa-9d2f-9272331d785d": Phase="Succeeded", Reason="", readiness=false. 
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 5 21:43:01.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
May 5 21:43:01.430: INFO: Waiting up to 5m0s for pod "var-expansion-27d71661-2622-45fa-9d2f-9272331d785d" in namespace "var-expansion-6232" to be "success or failure"
May 5 21:43:01.585: INFO: Pod "var-expansion-27d71661-2622-45fa-9d2f-9272331d785d": Phase="Pending", Reason="", readiness=false. Elapsed: 154.52364ms
May 5 21:43:03.589: INFO: Pod "var-expansion-27d71661-2622-45fa-9d2f-9272331d785d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15847614s
May 5 21:43:05.593: INFO: Pod "var-expansion-27d71661-2622-45fa-9d2f-9272331d785d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.162463361s
STEP: Saw pod success
May 5 21:43:05.593: INFO: Pod "var-expansion-27d71661-2622-45fa-9d2f-9272331d785d" satisfied condition "success or failure"
May 5 21:43:05.596: INFO: Trying to get logs from node jerma-worker pod var-expansion-27d71661-2622-45fa-9d2f-9272331d785d container dapi-container:
STEP: delete the pod
May 5 21:43:05.629: INFO: Waiting for pod var-expansion-27d71661-2622-45fa-9d2f-9272331d785d to disappear
May 5 21:43:05.634: INFO: Pod var-expansion-27d71661-2622-45fa-9d2f-9272331d785d no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 5 21:43:05.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6232" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2292,"failed":0}
SSSSSS
------------------------------
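The substitution under test is Kubernetes' own $(VAR) expansion in command and args, performed from the container's env before the process starts, independent of any shell. A pod in the same shape (the env name and value are illustrative):

  # var-expansion-example.yaml
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-example
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      env:
      - name: MESSAGE
        value: hello from the environment
      # $(MESSAGE) is substituted by Kubernetes, so the echoed text
      # appears in the pod logs even without shell-level expansion:
      command: ["/bin/sh", "-c", "echo $(MESSAGE)"]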
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:43:09.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:43:09.918: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 5 21:43:11.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5208 create -f -' May 5 21:43:17.750: INFO: stderr: "" May 5 21:43:17.750: INFO: stdout: "e2e-test-crd-publish-openapi-7027-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 5 21:43:17.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5208 delete e2e-test-crd-publish-openapi-7027-crds test-foo' May 5 21:43:17.870: INFO: stderr: "" May 5 21:43:17.870: INFO: stdout: "e2e-test-crd-publish-openapi-7027-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 5 21:43:17.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5208 apply -f -' May 5 21:43:18.109: INFO: stderr: "" May 5 21:43:18.109: INFO: stdout: "e2e-test-crd-publish-openapi-7027-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 5 21:43:18.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5208 delete e2e-test-crd-publish-openapi-7027-crds test-foo' May 5 21:43:18.237: INFO: stderr: "" May 5 21:43:18.237: INFO: stdout: "e2e-test-crd-publish-openapi-7027-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 5 21:43:18.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5208 create -f -' May 5 21:43:18.508: INFO: rc: 1 May 5 21:43:18.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5208 apply -f -' May 5 21:43:18.735: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 5 21:43:18.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5208 create -f -' May 5 21:43:18.962: INFO: rc: 1 May 5 21:43:18.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5208 apply -f -' May 5 21:43:19.198: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 5 
21:43:19.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7027-crds' May 5 21:43:19.434: INFO: stderr: "" May 5 21:43:19.434: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7027-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 5 21:43:19.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7027-crds.metadata' May 5 21:43:19.667: INFO: stderr: "" May 5 21:43:19.667: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7027-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in the 1.20\n release and the field is planned to be removed in the 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 5 21:43:19.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7027-crds.spec' May 5 21:43:19.888: INFO: stderr: "" May 5 21:43:19.888: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7027-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 5 21:43:19.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7027-crds.spec.bars' May 5 21:43:20.116: INFO: stderr: "" May 5 21:43:20.116: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7027-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 5 21:43:20.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7027-crds.spec.bars2' May 5 21:43:20.384: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:43:22.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5208" for this suite. • [SLOW TEST:12.472 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":147,"skipped":2339,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:43:22.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-tsxx STEP: Creating a pod to test atomic-volume-subpath May 5 21:43:22.399: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tsxx" in namespace "subpath-9567" to be "success or failure" May 5 21:43:22.441: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.412023ms May 5 21:43:24.639: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239637154s May 5 21:43:26.643: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Running", Reason="", readiness=true. Elapsed: 4.243575941s May 5 21:43:28.648: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Running", Reason="", readiness=true. Elapsed: 6.248137503s May 5 21:43:30.652: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Running", Reason="", readiness=true. Elapsed: 8.252464697s May 5 21:43:32.656: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Running", Reason="", readiness=true. Elapsed: 10.256238768s May 5 21:43:34.660: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Running", Reason="", readiness=true. Elapsed: 12.260543397s May 5 21:43:36.665: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Running", Reason="", readiness=true. Elapsed: 14.265068907s May 5 21:43:38.668: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Running", Reason="", readiness=true. Elapsed: 16.268895833s May 5 21:43:40.672: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Running", Reason="", readiness=true. Elapsed: 18.272983721s May 5 21:43:42.677: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Running", Reason="", readiness=true. Elapsed: 20.27710602s May 5 21:43:44.681: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Running", Reason="", readiness=true. Elapsed: 22.281417069s May 5 21:43:46.685: INFO: Pod "pod-subpath-test-secret-tsxx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.285829223s STEP: Saw pod success May 5 21:43:46.685: INFO: Pod "pod-subpath-test-secret-tsxx" satisfied condition "success or failure" May 5 21:43:46.688: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-tsxx container test-container-subpath-secret-tsxx: STEP: delete the pod May 5 21:43:46.715: INFO: Waiting for pod pod-subpath-test-secret-tsxx to disappear May 5 21:43:46.719: INFO: Pod pod-subpath-test-secret-tsxx no longer exists STEP: Deleting pod pod-subpath-test-secret-tsxx May 5 21:43:46.719: INFO: Deleting pod "pod-subpath-test-secret-tsxx" in namespace "subpath-9567" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:43:46.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9567" for this suite. 
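Note: the repeated Phase/Elapsed lines above are the framework's generic "success or failure" wait, which re-reads the pod until .status.phase reaches Succeeded or Failed. A minimal Go sketch of the same loop, shelling out to kubectl the way the e2e framework does (the pod and namespace names are taken from this run; the jsonpath query and the 2s poll interval are illustrative assumptions, not the framework's exact code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const ns, pod = "subpath-9567", "pod-subpath-test-secret-tsxx"
	deadline := time.Now().Add(5 * time.Minute) // mirrors "Waiting up to 5m0s"
	for time.Now().Before(deadline) {
		// Read just the pod phase, like the framework's status poll.
		out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
			"get", "pod", pod, "-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		phase := strings.TrimSpace(string(out))
		fmt.Println("phase:", phase)
		if phase == "Succeeded" || phase == "Failed" {
			return // the test then fetches container logs to verify the output
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to finish")
}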
• [SLOW TEST:24.448 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":148,"skipped":2352,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:43:46.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-9a287558-06e6-4eed-a014-940219d246e9 STEP: Creating a pod to test consume configMaps May 5 21:43:46.826: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-52b66fbb-5cad-447e-8d3e-c44227777bb8" in namespace "projected-2207" to be "success or failure" May 5 21:43:46.839: INFO: Pod "pod-projected-configmaps-52b66fbb-5cad-447e-8d3e-c44227777bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.183866ms May 5 21:43:48.842: INFO: Pod "pod-projected-configmaps-52b66fbb-5cad-447e-8d3e-c44227777bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016284141s May 5 21:43:50.854: INFO: Pod "pod-projected-configmaps-52b66fbb-5cad-447e-8d3e-c44227777bb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027906909s STEP: Saw pod success May 5 21:43:50.854: INFO: Pod "pod-projected-configmaps-52b66fbb-5cad-447e-8d3e-c44227777bb8" satisfied condition "success or failure" May 5 21:43:50.857: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-52b66fbb-5cad-447e-8d3e-c44227777bb8 container projected-configmap-volume-test: STEP: delete the pod May 5 21:43:50.889: INFO: Waiting for pod pod-projected-configmaps-52b66fbb-5cad-447e-8d3e-c44227777bb8 to disappear May 5 21:43:50.920: INFO: Pod pod-projected-configmaps-52b66fbb-5cad-447e-8d3e-c44227777bb8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:43:50.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2207" for this suite. 
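Note: "volume with mappings" in the spec name above means the projected configMap volume remaps keys to custom file paths via items. A sketch of such a manifest, applied through kubectl; every name and path in it is an illustration written for this note, not the test's actual fixture:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// An illustrative ConfigMap plus a pod that consumes it through a projected
// volume; items remaps key "data-1" to the file path "path/to/data-2".
const manifest = `
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-projected
spec:
  restartPolicy: Never
  containers:
  - name: show
    image: busybox
    command: ["cat", "/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-map
          items:
          - key: data-1
            path: path/to/data-2
`

func main() {
	cmd := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "apply", "-f", "-")
	cmd.Stdin = bytes.NewBufferString(manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}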
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:43:50.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-gvjd STEP: Creating a pod to test atomic-volume-subpath May 5 21:43:51.269: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gvjd" in namespace "subpath-635" to be "success or failure" May 5 21:43:51.275: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.776608ms May 5 21:43:53.278: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009158126s May 5 21:43:55.282: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Running", Reason="", readiness=true. Elapsed: 4.013405109s May 5 21:43:57.286: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Running", Reason="", readiness=true. Elapsed: 6.017272274s May 5 21:43:59.291: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Running", Reason="", readiness=true. Elapsed: 8.021815391s May 5 21:44:01.294: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Running", Reason="", readiness=true. Elapsed: 10.025345647s May 5 21:44:03.299: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Running", Reason="", readiness=true. Elapsed: 12.029815839s May 5 21:44:05.303: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Running", Reason="", readiness=true. Elapsed: 14.034459719s May 5 21:44:07.308: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Running", Reason="", readiness=true. Elapsed: 16.038965418s May 5 21:44:09.312: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Running", Reason="", readiness=true. Elapsed: 18.043187405s May 5 21:44:11.317: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Running", Reason="", readiness=true. Elapsed: 20.047983672s May 5 21:44:13.321: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Running", Reason="", readiness=true. Elapsed: 22.052420518s May 5 21:44:15.325: INFO: Pod "pod-subpath-test-downwardapi-gvjd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.056534167s STEP: Saw pod success May 5 21:44:15.325: INFO: Pod "pod-subpath-test-downwardapi-gvjd" satisfied condition "success or failure" May 5 21:44:15.328: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-gvjd container test-container-subpath-downwardapi-gvjd: STEP: delete the pod May 5 21:44:15.401: INFO: Waiting for pod pod-subpath-test-downwardapi-gvjd to disappear May 5 21:44:15.413: INFO: Pod pod-subpath-test-downwardapi-gvjd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-gvjd May 5 21:44:15.413: INFO: Deleting pod "pod-subpath-test-downwardapi-gvjd" in namespace "subpath-635" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:44:15.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-635" for this suite. • [SLOW TEST:24.493 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":150,"skipped":2392,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:44:15.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-64a5a87b-d1bf-4d67-a93f-dfc204a2f9af STEP: Creating a pod to test consume configMaps May 5 21:44:15.481: INFO: Waiting up to 5m0s for pod "pod-configmaps-71f7007e-7872-4f6d-8f55-0042c95f3702" in namespace "configmap-3235" to be "success or failure" May 5 21:44:15.525: INFO: Pod "pod-configmaps-71f7007e-7872-4f6d-8f55-0042c95f3702": Phase="Pending", Reason="", readiness=false. Elapsed: 43.879257ms May 5 21:44:17.804: INFO: Pod "pod-configmaps-71f7007e-7872-4f6d-8f55-0042c95f3702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.322992988s May 5 21:44:19.809: INFO: Pod "pod-configmaps-71f7007e-7872-4f6d-8f55-0042c95f3702": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327135289s May 5 21:44:21.831: INFO: Pod "pod-configmaps-71f7007e-7872-4f6d-8f55-0042c95f3702": Phase="Running", Reason="", readiness=true. Elapsed: 6.349283356s May 5 21:44:23.834: INFO: Pod "pod-configmaps-71f7007e-7872-4f6d-8f55-0042c95f3702": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.353042938s STEP: Saw pod success May 5 21:44:23.835: INFO: Pod "pod-configmaps-71f7007e-7872-4f6d-8f55-0042c95f3702" satisfied condition "success or failure" May 5 21:44:23.837: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-71f7007e-7872-4f6d-8f55-0042c95f3702 container configmap-volume-test: STEP: delete the pod May 5 21:44:23.856: INFO: Waiting for pod pod-configmaps-71f7007e-7872-4f6d-8f55-0042c95f3702 to disappear May 5 21:44:23.860: INFO: Pod pod-configmaps-71f7007e-7872-4f6d-8f55-0042c95f3702 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:44:23.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3235" for this suite. • [SLOW TEST:8.444 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2401,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:44:23.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 5 21:44:24.000: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:44:38.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-921" for this suite. 
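Note: the test above marks one CRD version served=false and then checks that its definitions disappear from the aggregated OpenAPI document while the other version survives. A crude spot-check of the same idea (the real test parses the swagger JSON per version; the substring below is just the kind prefix from this run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Fetch the aggregated OpenAPI v2 document the apiserver publishes.
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"get", "--raw", "/openapi/v2").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// A served CRD version contributes definitions whose names contain the kind.
	if strings.Contains(string(out), "e2e-test-crd-publish-openapi") {
		fmt.Println("CRD definitions still published")
	} else {
		fmt.Println("CRD definitions removed from the spec")
	}
}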
• [SLOW TEST:14.170 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":152,"skipped":2409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:44:38.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-3d78e7e1-8517-4248-8d2c-d834f47df0fd STEP: Creating a pod to test consume secrets May 5 21:44:38.462: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-43e3420c-0a5f-4aea-903a-dc36d1c675a3" in namespace "projected-5856" to be "success or failure" May 5 21:44:38.484: INFO: Pod "pod-projected-secrets-43e3420c-0a5f-4aea-903a-dc36d1c675a3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.253191ms May 5 21:44:40.903: INFO: Pod "pod-projected-secrets-43e3420c-0a5f-4aea-903a-dc36d1c675a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.441218876s May 5 21:44:42.907: INFO: Pod "pod-projected-secrets-43e3420c-0a5f-4aea-903a-dc36d1c675a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.445307672s May 5 21:44:45.008: INFO: Pod "pod-projected-secrets-43e3420c-0a5f-4aea-903a-dc36d1c675a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.545953923s STEP: Saw pod success May 5 21:44:45.008: INFO: Pod "pod-projected-secrets-43e3420c-0a5f-4aea-903a-dc36d1c675a3" satisfied condition "success or failure" May 5 21:44:45.010: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-43e3420c-0a5f-4aea-903a-dc36d1c675a3 container secret-volume-test: STEP: delete the pod May 5 21:44:45.068: INFO: Waiting for pod pod-projected-secrets-43e3420c-0a5f-4aea-903a-dc36d1c675a3 to disappear May 5 21:44:45.179: INFO: Pod pod-projected-secrets-43e3420c-0a5f-4aea-903a-dc36d1c675a3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:44:45.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5856" for this suite. 
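Note: each of these volume tests ends the same way: once the pod reaches Succeeded, the suite pulls the test container's logs ("Trying to get logs from node ...") and compares them with the content the mounted volume should have served. A minimal equivalent of that step, using the pod, namespace, and container names from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Pull the finished container's stdout, which the test compares against
	// the expected file content from the projected secret volume.
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
		"logs", "pod-projected-secrets-43e3420c-0a5f-4aea-903a-dc36d1c675a3",
		"-n", "projected-5856", "-c", "secret-volume-test").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubectl logs failed:", err)
	}
}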
• [SLOW TEST:7.151 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2442,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:44:45.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:44:45.566: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6dcc1b2-f2f4-4961-84fc-aebd15a5c876" in namespace "projected-430" to be "success or failure" May 5 21:44:45.760: INFO: Pod "downwardapi-volume-a6dcc1b2-f2f4-4961-84fc-aebd15a5c876": Phase="Pending", Reason="", readiness=false. Elapsed: 193.021006ms May 5 21:44:47.764: INFO: Pod "downwardapi-volume-a6dcc1b2-f2f4-4961-84fc-aebd15a5c876": Phase="Pending", Reason="", readiness=false. Elapsed: 2.197768059s May 5 21:44:49.783: INFO: Pod "downwardapi-volume-a6dcc1b2-f2f4-4961-84fc-aebd15a5c876": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216770928s May 5 21:44:51.987: INFO: Pod "downwardapi-volume-a6dcc1b2-f2f4-4961-84fc-aebd15a5c876": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420820649s May 5 21:44:54.255: INFO: Pod "downwardapi-volume-a6dcc1b2-f2f4-4961-84fc-aebd15a5c876": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.68807175s STEP: Saw pod success May 5 21:44:54.255: INFO: Pod "downwardapi-volume-a6dcc1b2-f2f4-4961-84fc-aebd15a5c876" satisfied condition "success or failure" May 5 21:44:54.401: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a6dcc1b2-f2f4-4961-84fc-aebd15a5c876 container client-container: STEP: delete the pod May 5 21:44:54.969: INFO: Waiting for pod downwardapi-volume-a6dcc1b2-f2f4-4961-84fc-aebd15a5c876 to disappear May 5 21:44:54.971: INFO: Pod downwardapi-volume-a6dcc1b2-f2f4-4961-84fc-aebd15a5c876 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:44:54.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-430" for this suite. 
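Note: in the downward API test above, the pod's client-container simply reads a file that a downwardAPI volume projects from resourceFieldRef limits.cpu. A sketch of that in-pod read; the mount path and the millicore divisor are assumed examples, not the test's actual values:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed mount point of a downwardAPI volume whose item uses
	// resourceFieldRef {resource: limits.cpu, divisor: "1m"}.
	b, err := os.ReadFile("/etc/podinfo/cpu_limit")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	// With a divisor of 1m the projected value is the CPU limit in millicores.
	fmt.Println("cpu limit:", string(b))
}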
• [SLOW TEST:9.788 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2450,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:44:54.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:45:17.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7709" for this suite. • [SLOW TEST:22.254 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":155,"skipped":2454,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:45:17.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-14c0eb9a-4c89-42be-b16f-259c71a70714 STEP: Creating a pod to test consume configMaps May 5 21:45:17.330: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a" in namespace "projected-7448" to be "success or failure" May 5 21:45:17.334: INFO: Pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.746282ms May 5 21:45:19.338: INFO: Pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007380553s May 5 21:45:21.341: INFO: Pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011129997s May 5 21:45:23.451: INFO: Pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120513512s May 5 21:45:25.455: INFO: Pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.124339337s May 5 21:45:27.524: INFO: Pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.193407397s May 5 21:45:29.526: INFO: Pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.196023682s May 5 21:45:31.530: INFO: Pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.199440588s May 5 21:45:33.533: INFO: Pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.202437501s STEP: Saw pod success May 5 21:45:33.533: INFO: Pod "pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a" satisfied condition "success or failure" May 5 21:45:33.535: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a container projected-configmap-volume-test: STEP: delete the pod May 5 21:45:33.914: INFO: Waiting for pod pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a to disappear May 5 21:45:34.116: INFO: Pod pod-projected-configmaps-02d16109-f9fe-4744-8231-72ee8c04144a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:45:34.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7448" for this suite. • [SLOW TEST:17.305 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2479,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:45:34.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 5 21:45:42.687: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 5 21:45:52.774: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:45:52.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-589" for this suite. 
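Note: "deleting the pod gracefully" above is a delete with a grace period; while the kubelet drains the pod, the API object carries deletionTimestamp and deletionGracePeriodSeconds (the metadata fields described in the kubectl explain output earlier). A sketch with an example pod name:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}

func main() {
	// Ask for graceful deletion without waiting for it to finish.
	run("delete", "pod", "demo-pod", "--grace-period=30", "--wait=false")
	// While the pod drains, the object carries its termination metadata.
	run("get", "pod", "demo-pod", "-o",
		"jsonpath={.metadata.deletionTimestamp} {.metadata.deletionGracePeriodSeconds}")
}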
• [SLOW TEST:18.247 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":157,"skipped":2496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:45:52.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:45:52.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-6831" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":158,"skipped":2519,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:45:52.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-a868958a-bc03-4f2f-8b28-37d8224d8cb2 STEP: Creating a pod to test consume secrets May 5 21:45:53.169: INFO: Waiting up to 5m0s for pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c" in namespace "secrets-481" to be "success or failure" May 5 21:45:53.206: INFO: Pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.119526ms May 5 21:45:55.557: INFO: Pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388124128s May 5 21:45:57.560: INFO: Pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.391326395s May 5 21:45:59.605: INFO: Pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.435703876s May 5 21:46:01.688: INFO: Pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.519201921s May 5 21:46:03.691: INFO: Pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.522480354s May 5 21:46:06.012: INFO: Pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.842931326s May 5 21:46:08.149: INFO: Pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.980174365s May 5 21:46:10.152: INFO: Pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.983197601s STEP: Saw pod success May 5 21:46:10.152: INFO: Pod "pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c" satisfied condition "success or failure" May 5 21:46:10.155: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c container secret-volume-test: STEP: delete the pod May 5 21:46:10.199: INFO: Waiting for pod pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c to disappear May 5 21:46:10.212: INFO: Pod pod-secrets-6dc25e03-6e06-46e7-abc8-4367899a304c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:46:10.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-481" for this suite. STEP: Destroying namespace "secret-namespace-6474" for this suite. • [SLOW TEST:17.310 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2526,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:46:10.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 5 21:46:10.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 5 21:46:10.392: INFO: stderr: "" May 5 21:46:10.392: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at 
\x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:46:10.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6990" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":160,"skipped":2536,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:46:10.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 5 21:46:17.950: INFO: 5 pods remaining May 5 21:46:17.950: INFO: 0 pods has nil DeletionTimestamp May 5 21:46:17.950: INFO: STEP: Gathering metrics W0505 21:46:19.076170 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 5 21:46:19.076: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:46:19.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-400" for this suite. 
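Note: "keep the rc around until all its pods are deleted if the deleteOptions says so" is foreground cascading deletion: the owner is marked with a deletionTimestamp plus a foregroundDeletion finalizer, and is only removed once the garbage collector has deleted its dependents. A sketch of observing that, with an example RC name:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// After a foreground delete, the RC should linger with a deletionTimestamp
	// and the foregroundDeletion finalizer until its pods are gone.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", "get", "rc", "demo-rc", "-o",
			"jsonpath={.metadata.deletionTimestamp} {.metadata.finalizers}").CombinedOutput()
		if err != nil {
			fmt.Println("rc is gone (NotFound once dependents are deleted):", err)
			return
		}
		fmt.Println(string(out))
		time.Sleep(time.Second)
	}
}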
• [SLOW TEST:9.308 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":161,"skipped":2552,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:46:19.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container May 5 21:46:29.975: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4703 PodName:pod-sharedvolume-30ff57a2-970c-4a41-a9c0-0be5507e4111 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 21:46:29.975: INFO: >>> kubeConfig: /root/.kube/config I0505 21:46:30.014964 7 log.go:172] (0xc0006226e0) (0xc001a6b400) Create stream I0505 21:46:30.015014 7 log.go:172] (0xc0006226e0) (0xc001a6b400) Stream added, broadcasting: 1 I0505 21:46:30.016992 7 log.go:172] (0xc0006226e0) Reply frame received for 1 I0505 21:46:30.017029 7 log.go:172] (0xc0006226e0) (0xc0024ca780) Create stream I0505 21:46:30.017041 7 log.go:172] (0xc0006226e0) (0xc0024ca780) Stream added, broadcasting: 3 I0505 21:46:30.017991 7 log.go:172] (0xc0006226e0) Reply frame received for 3 I0505 21:46:30.018023 7 log.go:172] (0xc0006226e0) (0xc0024ca820) Create stream I0505 21:46:30.018035 7 log.go:172] (0xc0006226e0) (0xc0024ca820) Stream added, broadcasting: 5 I0505 21:46:30.018967 7 log.go:172] (0xc0006226e0) Reply frame received for 5 I0505 21:46:30.089769 7 log.go:172] (0xc0006226e0) Data frame received for 5 I0505 21:46:30.089816 7 log.go:172] (0xc0024ca820) (5) Data frame handling I0505 21:46:30.089892 7 log.go:172] (0xc0006226e0) Data frame received for 3 I0505 21:46:30.089921 7 log.go:172] (0xc0024ca780) (3) Data frame handling I0505 21:46:30.089940 7 log.go:172] (0xc0024ca780) (3) Data frame sent I0505 21:46:30.089957 7 log.go:172] (0xc0006226e0) Data frame received for 3 I0505 21:46:30.089972 7 log.go:172] (0xc0024ca780) (3) Data frame handling I0505 21:46:30.091383 7 log.go:172] (0xc0006226e0) Data frame received for 1 I0505 21:46:30.091404 7 log.go:172] (0xc001a6b400) (1) Data frame handling I0505 21:46:30.091417 7 log.go:172] (0xc001a6b400) (1) Data frame sent I0505 21:46:30.091581 7 log.go:172] (0xc0006226e0) (0xc001a6b400) Stream removed, broadcasting: 1 I0505 21:46:30.091700 7 log.go:172] (0xc0006226e0) 
(0xc001a6b400) Stream removed, broadcasting: 1 I0505 21:46:30.091727 7 log.go:172] (0xc0006226e0) Go away received I0505 21:46:30.091759 7 log.go:172] (0xc0006226e0) (0xc0024ca780) Stream removed, broadcasting: 3 I0505 21:46:30.091785 7 log.go:172] (0xc0006226e0) (0xc0024ca820) Stream removed, broadcasting: 5 May 5 21:46:30.091: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:46:30.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4703" for this suite. • [SLOW TEST:10.366 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":162,"skipped":2582,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:46:30.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:46:30.209: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa682390-e921-4782-9c8d-5c01deb1f77c" in namespace "downward-api-6148" to be "success or failure" May 5 21:46:30.241: INFO: Pod "downwardapi-volume-fa682390-e921-4782-9c8d-5c01deb1f77c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.875294ms May 5 21:46:32.472: INFO: Pod "downwardapi-volume-fa682390-e921-4782-9c8d-5c01deb1f77c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.26339791s May 5 21:46:34.476: INFO: Pod "downwardapi-volume-fa682390-e921-4782-9c8d-5c01deb1f77c": Phase="Running", Reason="", readiness=true. Elapsed: 4.26687148s May 5 21:46:36.479: INFO: Pod "downwardapi-volume-fa682390-e921-4782-9c8d-5c01deb1f77c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.270271423s STEP: Saw pod success May 5 21:46:36.479: INFO: Pod "downwardapi-volume-fa682390-e921-4782-9c8d-5c01deb1f77c" satisfied condition "success or failure" May 5 21:46:36.482: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-fa682390-e921-4782-9c8d-5c01deb1f77c container client-container: STEP: delete the pod May 5 21:46:36.499: INFO: Waiting for pod downwardapi-volume-fa682390-e921-4782-9c8d-5c01deb1f77c to disappear May 5 21:46:36.503: INFO: Pod downwardapi-volume-fa682390-e921-4782-9c8d-5c01deb1f77c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:46:36.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6148" for this suite. • [SLOW TEST:6.412 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2588,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:46:36.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 21:46:37.102: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 21:46:41.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311997, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311997, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311997, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311997, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:46:43.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311997, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311997, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311997, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724311997, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:46:46.084: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:46:46.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5321" for this suite. STEP: Destroying namespace "webhook-5321-markers" for this suite. 
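Note: the "Patching a validating webhook configuration's rules" step above can be reproduced with a JSON patch against the configuration's rules list; the configuration name and rule index below are examples, while the path layout follows admissionregistration.k8s.io/v1:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// JSON patch that rewrites the first rule's operations list, mirroring
	// the "include the create operation" step.
	patch := `[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`
	out, err := exec.Command("kubectl", "patch", "validatingwebhookconfiguration",
		"demo-webhook-config", "--type=json", "-p", patch).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubectl patch failed:", err)
	}
}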
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.772 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":164,"skipped":2588,"failed":0} [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:46:46.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 5 21:46:46.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5435' May 5 21:46:46.434: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 5 21:46:46.434: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: rolling-update to same image controller May 5 21:46:46.473: INFO: scanned /root for discovery docs: May 5 21:46:46.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5435' May 5 21:47:21.256: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 5 21:47:21.256: INFO: stdout: "Created e2e-test-httpd-rc-4a3b2888ef530c66679faa172a073d21\nScaling up e2e-test-httpd-rc-4a3b2888ef530c66679faa172a073d21 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-4a3b2888ef530c66679faa172a073d21 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-4a3b2888ef530c66679faa172a073d21 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 5 21:47:21.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5435' May 5 21:47:21.349: INFO: stderr: "" May 5 21:47:21.349: INFO: stdout: "e2e-test-httpd-rc-4a3b2888ef530c66679faa172a073d21-wkt6x e2e-test-httpd-rc-8sbwr " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 May 5 21:47:26.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5435' May 5 21:47:26.441: INFO: stderr: "" May 5 21:47:26.441: INFO: stdout: "e2e-test-httpd-rc-4a3b2888ef530c66679faa172a073d21-wkt6x " May 5 21:47:26.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4a3b2888ef530c66679faa172a073d21-wkt6x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5435' May 5 21:47:26.537: INFO: stderr: "" May 5 21:47:26.537: INFO: stdout: "true" May 5 21:47:26.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4a3b2888ef530c66679faa172a073d21-wkt6x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5435' May 5 21:47:26.626: INFO: stderr: "" May 5 21:47:26.626: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 5 21:47:26.626: INFO: e2e-test-httpd-rc-4a3b2888ef530c66679faa172a073d21-wkt6x is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 5 21:47:26.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5435' May 5 21:47:26.740: INFO: stderr: "" May 5 21:47:26.740: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:47:26.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5435" for this suite.
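The rolling-update steps above drive a bare ReplicationController rather than a Deployment. The manifest equivalent of the deprecated `kubectl run --generator=run/v1` invocation in the log is roughly (a sketch covering only the fields the generator sets):

apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-httpd-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-httpd-rc             # rolling-update swaps this to a hash-suffixed copy
  template:
    metadata:
      labels:
        run: e2e-test-httpd-rc
    spec:
      containers:
      - name: e2e-test-httpd-rc
        image: docker.io/library/httpd:2.4.38-alpine

As the stderr lines note, both commands are deprecated: `kubectl rolling-update` scales a hash-named copy up and the original down client-side, then renames it back, whereas a Deployment performs the same rollout server-side via `kubectl rollout`.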
• [SLOW TEST:40.464 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":165,"skipped":2588,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:47:26.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-c1f11ff4-eda7-4bd9-b0eb-84158b283fc8 STEP: Creating a pod to test consume configMaps May 5 21:47:26.833: INFO: Waiting up to 5m0s for pod "pod-configmaps-b2465c50-6006-4baa-89ac-8625b48de4a5" in namespace "configmap-6775" to be "success or failure" May 5 21:47:26.838: INFO: Pod "pod-configmaps-b2465c50-6006-4baa-89ac-8625b48de4a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.504869ms May 5 21:47:28.840: INFO: Pod "pod-configmaps-b2465c50-6006-4baa-89ac-8625b48de4a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006918895s May 5 21:47:30.844: INFO: Pod "pod-configmaps-b2465c50-6006-4baa-89ac-8625b48de4a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010489906s May 5 21:47:33.116: INFO: Pod "pod-configmaps-b2465c50-6006-4baa-89ac-8625b48de4a5": Phase="Running", Reason="", readiness=true. Elapsed: 6.282003316s May 5 21:47:35.120: INFO: Pod "pod-configmaps-b2465c50-6006-4baa-89ac-8625b48de4a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.286037964s STEP: Saw pod success May 5 21:47:35.120: INFO: Pod "pod-configmaps-b2465c50-6006-4baa-89ac-8625b48de4a5" satisfied condition "success or failure" May 5 21:47:35.122: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b2465c50-6006-4baa-89ac-8625b48de4a5 container configmap-volume-test: STEP: delete the pod May 5 21:47:35.346: INFO: Waiting for pod pod-configmaps-b2465c50-6006-4baa-89ac-8625b48de4a5 to disappear May 5 21:47:35.803: INFO: Pod pod-configmaps-b2465c50-6006-4baa-89ac-8625b48de4a5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:47:35.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6775" for this suite. 
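The ConfigMap volume test above asserts that defaultMode is applied to the projected file. A pod of the kind it creates looks roughly like this (image and command are assumptions; the real test uses its own test image):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "stat -c '%a' /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-c1f11ff4-eda7-4bd9-b0eb-84158b283fc8
      defaultMode: 0400                # octal; every projected key gets this mode

The pod prints the file's mode and exits; the framework then inspects the container log and the terminal phase, which is the "success or failure" condition tracked in the log above.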
• [SLOW TEST:9.668 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2609,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:47:36.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 5 21:47:36.567: INFO: Waiting up to 5m0s for pod "pod-d3d3e312-6231-4932-bae9-4d63e025d899" in namespace "emptydir-3987" to be "success or failure" May 5 21:47:36.614: INFO: Pod "pod-d3d3e312-6231-4932-bae9-4d63e025d899": Phase="Pending", Reason="", readiness=false. Elapsed: 46.797202ms May 5 21:47:38.618: INFO: Pod "pod-d3d3e312-6231-4932-bae9-4d63e025d899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050558291s May 5 21:47:40.857: INFO: Pod "pod-d3d3e312-6231-4932-bae9-4d63e025d899": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289919666s May 5 21:47:42.860: INFO: Pod "pod-d3d3e312-6231-4932-bae9-4d63e025d899": Phase="Running", Reason="", readiness=true. Elapsed: 6.292783716s May 5 21:47:45.205: INFO: Pod "pod-d3d3e312-6231-4932-bae9-4d63e025d899": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.637413424s STEP: Saw pod success May 5 21:47:45.205: INFO: Pod "pod-d3d3e312-6231-4932-bae9-4d63e025d899" satisfied condition "success or failure" May 5 21:47:45.208: INFO: Trying to get logs from node jerma-worker2 pod pod-d3d3e312-6231-4932-bae9-4d63e025d899 container test-container: STEP: delete the pod May 5 21:47:45.679: INFO: Waiting for pod pod-d3d3e312-6231-4932-bae9-4d63e025d899 to disappear May 5 21:47:45.703: INFO: Pod pod-d3d3e312-6231-4932-bae9-4d63e025d899 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:47:45.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3987" for this suite. 
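The EmptyDir test above checks the permission bits on a volume that sets no medium, i.e. node-disk backing. A minimal sketch of such a pod (image and command assumed):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "ls -ld /test-volume"]  # prints the mount's mode bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # no medium set, so the node's default (disk) backing is used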
• [SLOW TEST:9.294 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2630,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:47:45.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:47:45.793: INFO: Waiting up to 5m0s for pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402" in namespace "downward-api-5099" to be "success or failure" May 5 21:47:45.803: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402": Phase="Pending", Reason="", readiness=false. Elapsed: 9.70655ms May 5 21:47:47.839: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046118296s May 5 21:47:49.843: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049671243s May 5 21:47:51.845: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052288306s May 5 21:47:53.893: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100327846s May 5 21:47:55.897: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402": Phase="Pending", Reason="", readiness=false. Elapsed: 10.104170584s May 5 21:47:58.193: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402": Phase="Running", Reason="", readiness=true. Elapsed: 12.400139408s May 5 21:48:00.196: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402": Phase="Running", Reason="", readiness=true. Elapsed: 14.403157954s May 5 21:48:02.235: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402": Phase="Running", Reason="", readiness=true. Elapsed: 16.441874067s May 5 21:48:04.238: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.444538973s STEP: Saw pod success May 5 21:48:04.238: INFO: Pod "downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402" satisfied condition "success or failure" May 5 21:48:04.241: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402 container client-container: STEP: delete the pod May 5 21:48:04.367: INFO: Waiting for pod downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402 to disappear May 5 21:48:04.384: INFO: Pod downwardapi-volume-612698a2-1808-4bd5-af6d-037d2be23402 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:48:04.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5099" for this suite. • [SLOW TEST:18.680 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:48:04.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:48:04.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5dcd43cd-b2d9-4bce-afb7-0a64fe179994" in namespace "projected-8340" to be "success or failure" May 5 21:48:04.491: INFO: Pod "downwardapi-volume-5dcd43cd-b2d9-4bce-afb7-0a64fe179994": Phase="Pending", Reason="", readiness=false. Elapsed: 30.855359ms May 5 21:48:06.628: INFO: Pod "downwardapi-volume-5dcd43cd-b2d9-4bce-afb7-0a64fe179994": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167452162s May 5 21:48:08.631: INFO: Pod "downwardapi-volume-5dcd43cd-b2d9-4bce-afb7-0a64fe179994": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.170035589s STEP: Saw pod success May 5 21:48:08.631: INFO: Pod "downwardapi-volume-5dcd43cd-b2d9-4bce-afb7-0a64fe179994" satisfied condition "success or failure" May 5 21:48:08.632: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-5dcd43cd-b2d9-4bce-afb7-0a64fe179994 container client-container: STEP: delete the pod May 5 21:48:08.644: INFO: Waiting for pod downwardapi-volume-5dcd43cd-b2d9-4bce-afb7-0a64fe179994 to disappear May 5 21:48:08.649: INFO: Pod downwardapi-volume-5dcd43cd-b2d9-4bce-afb7-0a64fe179994 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:48:08.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8340" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2679,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:48:08.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-7619/configmap-test-39f1faf5-aca9-46cf-b483-e7d5cb7e76fa STEP: Creating a pod to test consume configMaps May 5 21:48:08.752: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9b3e5da-72ad-4df0-a1fd-d66430c62cef" in namespace "configmap-7619" to be "success or failure" May 5 21:48:08.763: INFO: Pod "pod-configmaps-c9b3e5da-72ad-4df0-a1fd-d66430c62cef": Phase="Pending", Reason="", readiness=false. Elapsed: 11.065023ms May 5 21:48:10.781: INFO: Pod "pod-configmaps-c9b3e5da-72ad-4df0-a1fd-d66430c62cef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028641262s May 5 21:48:12.822: INFO: Pod "pod-configmaps-c9b3e5da-72ad-4df0-a1fd-d66430c62cef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069756596s STEP: Saw pod success May 5 21:48:12.822: INFO: Pod "pod-configmaps-c9b3e5da-72ad-4df0-a1fd-d66430c62cef" satisfied condition "success or failure" May 5 21:48:12.825: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c9b3e5da-72ad-4df0-a1fd-d66430c62cef container env-test: STEP: delete the pod May 5 21:48:13.069: INFO: Waiting for pod pod-configmaps-c9b3e5da-72ad-4df0-a1fd-d66430c62cef to disappear May 5 21:48:13.087: INFO: Pod pod-configmaps-c9b3e5da-72ad-4df0-a1fd-d66430c62cef no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:48:13.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7619" for this suite. 
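The ConfigMap environment-variable test above wires a key into the container's environment rather than into a volume. A sketch of the two objects involved (names and values illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-example         # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "env"]            # the framework greps the container log for the value
    env:
    - name: CONFIG_DATA_1                   # assumed variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example
          key: data-1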
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2691,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:48:13.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:48:19.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6475" for this suite. • [SLOW TEST:6.230 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2695,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:48:19.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 5 21:48:23.973: INFO: Successfully updated pod "pod-update-4c28589a-710c-4ef7-9767-d9763859c00b" STEP: verifying the updated pod is in kubernetes May 5 21:48:23.984: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:48:23.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3296" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:48:23.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:48:24.047: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 5 21:48:24.054: INFO: Number of nodes with available pods: 0 May 5 21:48:24.055: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 5 21:48:24.141: INFO: Number of nodes with available pods: 0 May 5 21:48:24.141: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:25.146: INFO: Number of nodes with available pods: 0 May 5 21:48:25.146: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:26.146: INFO: Number of nodes with available pods: 0 May 5 21:48:26.146: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:27.145: INFO: Number of nodes with available pods: 0 May 5 21:48:27.145: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:28.146: INFO: Number of nodes with available pods: 0 May 5 21:48:28.146: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:29.169: INFO: Number of nodes with available pods: 1 May 5 21:48:29.169: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 5 21:48:29.204: INFO: Number of nodes with available pods: 1 May 5 21:48:29.204: INFO: Number of running nodes: 0, number of available pods: 1 May 5 21:48:30.219: INFO: Number of nodes with available pods: 0 May 5 21:48:30.219: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 5 21:48:30.262: INFO: Number of nodes with available pods: 0 May 5 21:48:30.262: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:31.265: INFO: Number of nodes with available pods: 0 May 5 21:48:31.265: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:32.266: INFO: Number of nodes with available pods: 0 May 5 21:48:32.266: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:33.265: INFO: Number of nodes with available pods: 0 May 5 21:48:33.265: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:34.266: INFO: Number of nodes with available pods: 0 May 5 21:48:34.266: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:35.265: INFO: Number of nodes with available pods: 0 May 5 21:48:35.265: INFO: Node 
jerma-worker is running more than one daemon pod May 5 21:48:36.266: INFO: Number of nodes with available pods: 0 May 5 21:48:36.266: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:37.266: INFO: Number of nodes with available pods: 0 May 5 21:48:37.266: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:38.266: INFO: Number of nodes with available pods: 0 May 5 21:48:38.266: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:39.290: INFO: Number of nodes with available pods: 0 May 5 21:48:39.290: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:40.267: INFO: Number of nodes with available pods: 0 May 5 21:48:40.267: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:41.266: INFO: Number of nodes with available pods: 0 May 5 21:48:41.266: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:42.267: INFO: Number of nodes with available pods: 0 May 5 21:48:42.267: INFO: Node jerma-worker is running more than one daemon pod May 5 21:48:43.277: INFO: Number of nodes with available pods: 1 May 5 21:48:43.278: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9555, will wait for the garbage collector to delete the pods May 5 21:48:43.341: INFO: Deleting DaemonSet.extensions daemon-set took: 5.96703ms May 5 21:48:43.642: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.212698ms May 5 21:48:59.345: INFO: Number of nodes with available pods: 0 May 5 21:48:59.345: INFO: Number of running nodes: 0, number of available pods: 0 May 5 21:48:59.348: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9555/daemonsets","resourceVersion":"13685892"},"items":null} May 5 21:48:59.350: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9555/pods","resourceVersion":"13685892"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:48:59.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9555" for this suite. • [SLOW TEST:35.420 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":173,"skipped":2730,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:48:59.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:49:06.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6544" for this suite. • [SLOW TEST:7.078 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":174,"skipped":2736,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:49:06.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 21:49:07.196: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 21:49:09.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312147, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312147, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312147, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312147, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:49:11.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312147, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312147, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312147, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312147, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:49:14.703: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:49:14.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:49:15.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3911" for this suite. STEP: Destroying namespace "webhook-3911-markers" for this suite. 
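The admission test above registers a validating webhook for custom resources and confirms that create, update, and delete are all denied until the offending data is removed. A sketch of such a registration, assuming the same webhook.example.com group used by the other custom-resource webhook tests in this run:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-deny-cr-crud          # illustrative name
webhooks:
- name: deny-crd-data.webhook.example.com
  rules:
  - apiGroups: ["webhook.example.com"] # assumed group
    apiVersions: ["*"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["*"]
  clientConfig:
    service:
      namespace: webhook-3911          # the webhook service deployed earlier in this test
      name: e2e-test-webhook
      path: /custom-resource           # assumed handler path
  admissionReviewVersions: ["v1"]
  sideEffects: None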
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.479 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":175,"skipped":2753,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:49:15.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:49:16.036: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:49:21.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3104" for this suite. 
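The CustomResourceDefinition test above only needs some CRDs to exist so that it can list them through the apiextensions API. A minimal v1 CRD of the kind it creates (group and names illustrative):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.mygroup.example.com   # must be <plural>.<group>
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

Listing is then just `kubectl get customresourcedefinitions`, the CLI equivalent of the API call the test makes.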
• [SLOW TEST:5.702 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":176,"skipped":2756,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:49:21.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 21:49:22.189: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 21:49:24.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312162, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312162, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312162, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312162, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:49:27.257: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:49:27.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5222-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:49:28.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2826" for this suite. STEP: Destroying namespace "webhook-2826-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.809 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":177,"skipped":2787,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:49:28.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 5 21:49:33.711: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:49:33.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8485" for this suite. 
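The container-runtime test above hinges on how TerminationMessagePolicy behaves: the fallback to logs only happens when a container fails, so a succeeding container that writes neither a termination message nor log output ends with an empty message, matching the "Expected: &{}" assertion in the log. A sketch of such a container spec (image and command assumed):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "exit 0"]         # succeeds without writing a message or logs
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError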
• [SLOW TEST:5.448 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2805,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:49:33.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 5 21:49:34.055: INFO: Waiting up to 5m0s for pod "var-expansion-f3ec6243-d8a6-4130-a24a-a4b1a47c3436" in namespace "var-expansion-8641" to be "success or failure" May 5 21:49:34.058: INFO: Pod "var-expansion-f3ec6243-d8a6-4130-a24a-a4b1a47c3436": Phase="Pending", Reason="", readiness=false. Elapsed: 3.152519ms May 5 21:49:36.062: INFO: Pod "var-expansion-f3ec6243-d8a6-4130-a24a-a4b1a47c3436": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007037794s May 5 21:49:38.066: INFO: Pod "var-expansion-f3ec6243-d8a6-4130-a24a-a4b1a47c3436": Phase="Running", Reason="", readiness=true. Elapsed: 4.010774458s May 5 21:49:40.070: INFO: Pod "var-expansion-f3ec6243-d8a6-4130-a24a-a4b1a47c3436": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01462247s STEP: Saw pod success May 5 21:49:40.070: INFO: Pod "var-expansion-f3ec6243-d8a6-4130-a24a-a4b1a47c3436" satisfied condition "success or failure" May 5 21:49:40.073: INFO: Trying to get logs from node jerma-worker pod var-expansion-f3ec6243-d8a6-4130-a24a-a4b1a47c3436 container dapi-container: STEP: delete the pod May 5 21:49:40.097: INFO: Waiting for pod var-expansion-f3ec6243-d8a6-4130-a24a-a4b1a47c3436 to disappear May 5 21:49:40.101: INFO: Pod var-expansion-f3ec6243-d8a6-4130-a24a-a4b1a47c3436 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:49:40.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8641" for this suite. 
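The Variable Expansion test above composes one environment variable from others; the kubelet expands $(VAR) references against variables defined earlier in the same list. A sketch (variable names and separator assumed):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"          # expanded by the kubelet to "foo-value;;bar-value"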
• [SLOW TEST:6.180 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2821,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:49:40.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 21:49:40.223: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6b1c02e9-8256-4c85-9c0c-950147357fb7" in namespace "projected-1190" to be "success or failure" May 5 21:49:40.261: INFO: Pod "downwardapi-volume-6b1c02e9-8256-4c85-9c0c-950147357fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.335361ms May 5 21:49:42.265: INFO: Pod "downwardapi-volume-6b1c02e9-8256-4c85-9c0c-950147357fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042027634s May 5 21:49:44.492: INFO: Pod "downwardapi-volume-6b1c02e9-8256-4c85-9c0c-950147357fb7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269729744s May 5 21:49:46.497: INFO: Pod "downwardapi-volume-6b1c02e9-8256-4c85-9c0c-950147357fb7": Phase="Running", Reason="", readiness=true. Elapsed: 6.274840237s May 5 21:49:48.508: INFO: Pod "downwardapi-volume-6b1c02e9-8256-4c85-9c0c-950147357fb7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.28538085s STEP: Saw pod success May 5 21:49:48.508: INFO: Pod "downwardapi-volume-6b1c02e9-8256-4c85-9c0c-950147357fb7" satisfied condition "success or failure" May 5 21:49:48.510: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6b1c02e9-8256-4c85-9c0c-950147357fb7 container client-container: STEP: delete the pod May 5 21:49:48.597: INFO: Waiting for pod downwardapi-volume-6b1c02e9-8256-4c85-9c0c-950147357fb7 to disappear May 5 21:49:49.256: INFO: Pod downwardapi-volume-6b1c02e9-8256-4c85-9c0c-950147357fb7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:49:49.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1190" for this suite. 
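The projected downwardAPI test above exposes the container's own CPU request as a file inside a projected volume. A sketch of the relevant spec (request value and paths assumed):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                      # assumed value; the file below reports it
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m              # report the request in millicores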
• [SLOW TEST:9.404 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2841,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:49:49.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-05276f11-3e1d-40d9-930e-f7b1fc1f3eaa STEP: Creating a pod to test consume secrets May 5 21:49:50.344: INFO: Waiting up to 5m0s for pod "pod-secrets-8be80d4e-ed98-4b34-98c0-3cb447e98da0" in namespace "secrets-2147" to be "success or failure" May 5 21:49:50.407: INFO: Pod "pod-secrets-8be80d4e-ed98-4b34-98c0-3cb447e98da0": Phase="Pending", Reason="", readiness=false. Elapsed: 63.37653ms May 5 21:49:52.434: INFO: Pod "pod-secrets-8be80d4e-ed98-4b34-98c0-3cb447e98da0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090280507s May 5 21:49:54.449: INFO: Pod "pod-secrets-8be80d4e-ed98-4b34-98c0-3cb447e98da0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104983748s May 5 21:49:56.452: INFO: Pod "pod-secrets-8be80d4e-ed98-4b34-98c0-3cb447e98da0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108276835s STEP: Saw pod success May 5 21:49:56.452: INFO: Pod "pod-secrets-8be80d4e-ed98-4b34-98c0-3cb447e98da0" satisfied condition "success or failure" May 5 21:49:56.455: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-8be80d4e-ed98-4b34-98c0-3cb447e98da0 container secret-volume-test: STEP: delete the pod May 5 21:49:56.486: INFO: Waiting for pod pod-secrets-8be80d4e-ed98-4b34-98c0-3cb447e98da0 to disappear May 5 21:49:56.490: INFO: Pod pod-secrets-8be80d4e-ed98-4b34-98c0-3cb447e98da0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:49:56.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2147" for this suite. 
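The Secrets test above combines three knobs: a non-root user, an fsGroup on the pod, and a defaultMode on the secret volume, then checks the resulting ownership and mode. A sketch (uid, gid, mode, image, and command assumed):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example            # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # assumed non-root uid
    fsGroup: 1001                      # assumed gid; applied as group owner of the volume
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-05276f11-3e1d-40d9-930e-f7b1fc1f3eaa
      defaultMode: 0440                # octal; combined with fsGroup for group readability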
• [SLOW TEST:6.985 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2842,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:49:56.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-vbzj STEP: Creating a pod to test atomic-volume-subpath May 5 21:49:56.642: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vbzj" in namespace "subpath-5863" to be "success or failure" May 5 21:49:56.646: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.485662ms May 5 21:49:58.745: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103034747s May 5 21:50:00.749: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 4.10670993s May 5 21:50:02.752: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 6.109385841s May 5 21:50:04.756: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 8.113226413s May 5 21:50:06.781: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 10.138562561s May 5 21:50:08.785: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 12.142687073s May 5 21:50:10.789: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 14.146377088s May 5 21:50:12.792: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 16.149838629s May 5 21:50:14.841: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 18.198171191s May 5 21:50:16.844: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 20.201963159s May 5 21:50:18.847: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 22.204707918s May 5 21:50:20.857: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.214734705s May 5 21:50:22.907: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Running", Reason="", readiness=true. Elapsed: 26.264919836s May 5 21:50:24.911: INFO: Pod "pod-subpath-test-projected-vbzj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.268263079s STEP: Saw pod success May 5 21:50:24.911: INFO: Pod "pod-subpath-test-projected-vbzj" satisfied condition "success or failure" May 5 21:50:24.913: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-vbzj container test-container-subpath-projected-vbzj: STEP: delete the pod May 5 21:50:24.960: INFO: Waiting for pod pod-subpath-test-projected-vbzj to disappear May 5 21:50:24.971: INFO: Pod pod-subpath-test-projected-vbzj no longer exists STEP: Deleting pod pod-subpath-test-projected-vbzj May 5 21:50:24.971: INFO: Deleting pod "pod-subpath-test-projected-vbzj" in namespace "subpath-5863" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:50:24.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5863" for this suite. • [SLOW TEST:28.564 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":182,"skipped":2870,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:50:25.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:50:42.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9106" for this suite. • [SLOW TEST:17.287 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":183,"skipped":2889,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:50:42.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 5 21:50:42.432: INFO: PodSpec: initContainers in spec.initContainers May 5 21:51:55.992: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b15e369d-4384-4099-b03c-b6984880ee5b", GenerateName:"", Namespace:"init-container-2187", SelfLink:"/api/v1/namespaces/init-container-2187/pods/pod-init-b15e369d-4384-4099-b03c-b6984880ee5b", UID:"b176ea74-9162-486f-ad2f-12868d109aa1", ResourceVersion:"13686817", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724312242, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"432806313"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dxclb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0023a8b00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dxclb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dxclb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dxclb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005102848), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc006f2a2a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0051028d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0051028f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0051028f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0051028fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312242, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312242, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312242, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312242, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.180", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.180"}}, StartTime:(*v1.Time)(0xc00146d140), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00146d1a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000b86620)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://99a56abbe8f86ca779f6cafbd64c8f7947cd549814a306e55f1a7b6c9e33523e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00146d1c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00146d180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00510297f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:51:55.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2187" for this suite. 
• [SLOW TEST:74.015 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":184,"skipped":2891,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:51:56.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:51:57.323: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 5 21:52:01.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4296 create -f -' May 5 21:52:12.095: INFO: stderr: "" May 5 21:52:12.095: INFO: stdout: "e2e-test-crd-publish-openapi-3545-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 5 21:52:12.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4296 delete e2e-test-crd-publish-openapi-3545-crds test-cr' May 5 21:52:12.195: INFO: stderr: "" May 5 21:52:12.195: INFO: stdout: "e2e-test-crd-publish-openapi-3545-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 5 21:52:12.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4296 apply -f -' May 5 21:52:12.416: INFO: stderr: "" May 5 21:52:12.416: INFO: stdout: "e2e-test-crd-publish-openapi-3545-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 5 21:52:12.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4296 delete e2e-test-crd-publish-openapi-3545-crds test-cr' May 5 21:52:12.515: INFO: stderr: "" May 5 21:52:12.515: INFO: stdout: "e2e-test-crd-publish-openapi-3545-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 5 21:52:12.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3545-crds' May 5 21:52:12.726: INFO: stderr: "" May 5 21:52:12.726: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3545-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:52:15.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4296" for this suite. • [SLOW TEST:19.399 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":185,"skipped":2907,"failed":0} [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:52:15.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8643 STEP: creating a selector STEP: Creating the service pods in kubernetes May 5 21:52:15.881: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 5 21:53:38.298: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.3:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8643 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 21:53:38.298: INFO: >>> kubeConfig: /root/.kube/config I0505 21:53:38.320728 7 log.go:172] (0xc002a24840) (0xc0014dc460) Create stream I0505 21:53:38.320757 7 log.go:172] (0xc002a24840) (0xc0014dc460) Stream added, broadcasting: 1 I0505 21:53:38.322240 7 log.go:172] (0xc002a24840) Reply frame received for 1 I0505 21:53:38.322271 7 log.go:172] (0xc002a24840) (0xc001f0f900) Create stream I0505 21:53:38.322281 7 log.go:172] (0xc002a24840) (0xc001f0f900) Stream added, broadcasting: 3 I0505 21:53:38.323032 7 log.go:172] (0xc002a24840) Reply frame received for 3 I0505 21:53:38.323055 7 log.go:172] (0xc002a24840) (0xc0024cafa0) Create stream I0505 21:53:38.323065 7 log.go:172] (0xc002a24840) (0xc0024cafa0) Stream added, broadcasting: 5 I0505 21:53:38.323859 7 log.go:172] (0xc002a24840) Reply frame received for 5 I0505 21:53:38.547288 7 log.go:172] (0xc002a24840) Data frame received for 3 I0505 21:53:38.547311 7 log.go:172] (0xc001f0f900) (3) Data frame handling I0505 21:53:38.547319 7 log.go:172] (0xc001f0f900) (3) Data frame sent I0505 21:53:38.548244 7 log.go:172] (0xc002a24840) Data frame received for 5 I0505 21:53:38.548254 7 log.go:172] (0xc0024cafa0) (5) Data frame handling I0505 21:53:38.549074 7 log.go:172] (0xc002a24840) Data frame received for 3 I0505 21:53:38.549084 
7 log.go:172] (0xc001f0f900) (3) Data frame handling I0505 21:53:38.555651 7 log.go:172] (0xc002a24840) Data frame received for 1 I0505 21:53:38.555675 7 log.go:172] (0xc0014dc460) (1) Data frame handling I0505 21:53:38.555684 7 log.go:172] (0xc0014dc460) (1) Data frame sent I0505 21:53:38.555693 7 log.go:172] (0xc002a24840) (0xc0014dc460) Stream removed, broadcasting: 1 I0505 21:53:38.555754 7 log.go:172] (0xc002a24840) (0xc0014dc460) Stream removed, broadcasting: 1 I0505 21:53:38.555763 7 log.go:172] (0xc002a24840) (0xc001f0f900) Stream removed, broadcasting: 3 I0505 21:53:38.555769 7 log.go:172] (0xc002a24840) (0xc0024cafa0) Stream removed, broadcasting: 5 May 5 21:53:38.555: INFO: Found all expected endpoints: [netserver-0] I0505 21:53:38.555972 7 log.go:172] (0xc002a24840) Go away received May 5 21:53:38.558: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.181:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8643 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 21:53:38.558: INFO: >>> kubeConfig: /root/.kube/config I0505 21:53:38.583221 7 log.go:172] (0xc002a24dc0) (0xc0014dc960) Create stream I0505 21:53:38.583247 7 log.go:172] (0xc002a24dc0) (0xc0014dc960) Stream added, broadcasting: 1 I0505 21:53:38.585101 7 log.go:172] (0xc002a24dc0) Reply frame received for 1 I0505 21:53:38.585287 7 log.go:172] (0xc002a24dc0) (0xc0014dca00) Create stream I0505 21:53:38.585299 7 log.go:172] (0xc002a24dc0) (0xc0014dca00) Stream added, broadcasting: 3 I0505 21:53:38.590660 7 log.go:172] (0xc002a24dc0) Reply frame received for 3 I0505 21:53:38.590699 7 log.go:172] (0xc002a24dc0) (0xc001f0fae0) Create stream I0505 21:53:38.590711 7 log.go:172] (0xc002a24dc0) (0xc001f0fae0) Stream added, broadcasting: 5 I0505 21:53:38.599907 7 log.go:172] (0xc002a24dc0) Reply frame received for 5 I0505 21:53:38.744735 7 log.go:172] (0xc002a24dc0) Data frame received for 5 I0505 21:53:38.744756 7 log.go:172] (0xc001f0fae0) (5) Data frame handling I0505 21:53:38.744776 7 log.go:172] (0xc002a24dc0) Data frame received for 3 I0505 21:53:38.744782 7 log.go:172] (0xc0014dca00) (3) Data frame handling I0505 21:53:38.744789 7 log.go:172] (0xc0014dca00) (3) Data frame sent I0505 21:53:38.744794 7 log.go:172] (0xc002a24dc0) Data frame received for 3 I0505 21:53:38.744799 7 log.go:172] (0xc0014dca00) (3) Data frame handling I0505 21:53:38.746340 7 log.go:172] (0xc002a24dc0) Data frame received for 1 I0505 21:53:38.746353 7 log.go:172] (0xc0014dc960) (1) Data frame handling I0505 21:53:38.746362 7 log.go:172] (0xc0014dc960) (1) Data frame sent I0505 21:53:38.746432 7 log.go:172] (0xc002a24dc0) (0xc0014dc960) Stream removed, broadcasting: 1 I0505 21:53:38.746496 7 log.go:172] (0xc002a24dc0) (0xc0014dc960) Stream removed, broadcasting: 1 I0505 21:53:38.746505 7 log.go:172] (0xc002a24dc0) (0xc0014dca00) Stream removed, broadcasting: 3 I0505 21:53:38.746512 7 log.go:172] (0xc002a24dc0) (0xc001f0fae0) Stream removed, broadcasting: 5 May 5 21:53:38.746: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:53:38.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0505 21:53:38.746738 7 log.go:172] (0xc002a24dc0) Go away received STEP: Destroying namespace "pod-network-test-8643" for this suite. 
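The ExecWithOptions calls above are node-to-pod reachability probes: from the host-network test pod, curl each netserver pod's IP on port 8080 and confirm the /hostName response matches the expected endpoint. A standalone stand-in for one probe might look like the following; this is a hypothetical helper, assuming only the agnhost netexec /hostName endpoint on port 8080 that appears in the log.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// Probe one netserver pod the way the test's curl does: GET /hostName on
// port 8080 and report the hostname the endpoint returns.
func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: hostnameprobe <pod-ip>")
		os.Exit(2)
	}
	podIP := os.Args[1] // e.g. "10.244.1.3" from the log above
	client := &http.Client{Timeout: 15 * time.Second} // mirrors curl --max-time 15
	resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", podIP))
	if err != nil {
		fmt.Fprintln(os.Stderr, "probe failed:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		os.Exit(1)
	}
	fmt.Printf("endpoint %s reported hostname %q\n", podIP, string(body))
}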
• [SLOW TEST:82.989 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":2907,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:53:38.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 21:53:39.648: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}}, CollisionCount:(*int32)(nil)} May 5 21:53:41.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:53:43.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
[The deployment never became ready within the captured window: the same DeploymentStatus (ReadyReplicas:0; Available=False with Reason:"MinimumReplicasUnavailable", Progressing=True with Reason:"ReplicaSetUpdated") was re-logged unchanged on every poll from 21:53:45.813 through 21:54:49.651. The duplicate dumps are elided here; the captured log breaks off mid-dump at the 21:54:49.651 poll.]
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:54:52.348: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:54:53.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:54:56.887: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:54:59.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:54:59.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:04.595: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:07.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:11.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:11.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:13.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:15.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:22.186: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:23.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:25.886: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:27.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:29.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:31.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:33.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:35.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:38.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:40.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:42.472: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:46.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:47.755: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:50.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:52.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:54.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:56.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:55:57.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:00.037: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:02.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:03.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:05.815: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:07.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:09.750: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:12.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:14.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:15.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:17.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:20.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:21.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:23.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:26.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:27.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:29.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:32.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:34.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:36.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:38.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:40.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:41.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:44.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:45.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:49.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 21:56:49.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312419, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 21:56:54.206: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:57:06.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3565" for this suite. STEP: Destroying namespace "webhook-3565-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:209.010 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":187,"skipped":2919,"failed":0} SSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:57:07.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 21:57:11.610: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814" in namespace "security-context-test-1405" to be "success or failure" May 5 21:57:12.918: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 1.308249958s May 5 21:57:16.650: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 5.040308951s May 5 21:57:18.890: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 7.280307733s May 5 21:57:24.589: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 12.979704792s May 5 21:57:27.044: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 15.434504646s May 5 21:57:29.189: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 17.57966897s May 5 21:57:31.385: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 19.775688691s May 5 21:57:33.409: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 21.799674326s May 5 21:57:35.413: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 23.803007622s May 5 21:57:37.433: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. 
Elapsed: 25.82279483s May 5 21:57:40.020: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 28.410280489s May 5 21:57:42.475: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 30.865448862s May 5 21:57:44.805: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 33.195222013s May 5 21:57:47.081: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 35.470831208s May 5 21:57:49.235: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 37.625698605s May 5 21:57:52.541: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 40.931603538s May 5 21:57:54.627: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 43.017500845s May 5 21:57:56.631: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 45.02085133s May 5 21:57:58.633: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 47.023702853s May 5 21:58:00.937: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 49.326757666s May 5 21:58:02.940: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 51.330645282s May 5 21:58:04.943: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 53.333210615s May 5 21:58:07.188: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 55.578527487s May 5 21:58:10.117: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 58.507195116s May 5 21:58:13.340: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.730025634s May 5 21:58:15.847: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Pending", Reason="", readiness=false. Elapsed: 1m4.237277951s May 5 21:58:17.876: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Running", Reason="", readiness=true. Elapsed: 1m6.266684961s May 5 21:58:20.080: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m8.470276537s May 5 21:58:20.080: INFO: Pod "alpine-nnp-false-0b08be32-b894-457d-b43b-db4957a0f814" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:58:20.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1405" for this suite. 
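The alpine-nnp-false pod that just ran exercises a single security knob: its spec pins AllowPrivilegeEscalation to false, which maps to the kernel's no_new_privs flag on the container process. A minimal sketch of such a pod spec (image and names are illustrative, not the suite's own test image):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nnpFalsePod returns a pod whose container may not gain privileges beyond
// its parent process, even via setuid binaries; the e2e test asserts the
// container observes this and exits successfully ("success or failure" above).
func nnpFalsePod(name string) *corev1.Pod {
	noEscalation := false // sketch mirroring the test's intent
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "alpine:3.11", // placeholder image
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &noEscalation,
				},
			}},
		},
	}
}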
• [SLOW TEST:72.413 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2922,"failed":0} S ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:58:20.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-2f9ce30b-85cf-4e6a-8c99-0168fe7ea3b3 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-2f9ce30b-85cf-4e6a-8c99-0168fe7ea3b3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 21:59:51.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4876" for this suite. 
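The roughly 90 seconds this test spends at "waiting to observe update in volume" is expected: files projected from a ConfigMap volume are refreshed on the kubelet's periodic sync, not instantly. A sketch of the update half in Go (client-go v0.18+ signatures assumed; function and parameter names are illustrative):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateConfigMapValue mutates one key of an existing ConfigMap. A pod that
// mounts the ConfigMap as a volume sees the projected file change only after
// the kubelet next syncs the volume (commonly up to a minute or so), which is
// what the test above polls for.
func updateConfigMapValue(ctx context.Context, client kubernetes.Interface, ns, name, key, value string) error {
	cm, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data[key] = value
	_, err = client.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	return err
}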
• [SLOW TEST:93.344 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":2923,"failed":0} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 21:59:53.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 5 21:59:56.815: INFO: mount-test service account has no secret references STEP: getting the auto-created API token STEP: reading a file in the container May 5 22:00:27.244: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7356 pod-service-account-d7c56c8c-2d59-47e2-9db8-a2d5a9976bb7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 5 22:00:27.454: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7356 pod-service-account-d7c56c8c-2d59-47e2-9db8-a2d5a9976bb7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 5 22:00:27.634: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7356 pod-service-account-d7c56c8c-2d59-47e2-9db8-a2d5a9976bb7 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:00:28.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7356" for this suite. 
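The three kubectl exec calls above simply cat files from the fixed path where the kubelet projects service account credentials. From inside any container with a mounted token, the equivalent check is a few lines of Go (a standalone sketch, not the suite's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// The kubelet mounts the pod's service account credentials at this fixed path.
const serviceAccountDir = "/var/run/secrets/kubernetes.io/serviceaccount"

// Verify that the token, CA bundle, and namespace files exist and are
// non-empty, mirroring what the test's exec-and-cat steps establish.
func main() {
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(serviceAccountDir, f))
		if err != nil || len(b) == 0 {
			fmt.Fprintf(os.Stderr, "service account file %s missing or empty: %v\n", f, err)
			os.Exit(1)
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
}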
• [SLOW TEST:34.672 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":190,"skipped":2933,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:00:28.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:00:29.081: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 5 22:00:35.891: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:00:38.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3055" for this suite. 
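The quota interplay above works as follows: with a ResourceQuota capping the namespace at two pods, an RC that asks for more gets a ReplicaFailure condition until it is scaled down to fit. A sketch of the two pieces involved (object and function names are illustrative):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podQuota caps the namespace at two pods, as the "condition-test" quota does.
func podQuota(name string) *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
}

// hasReplicaFailure reports whether the controller has surfaced the failure
// condition the test first asserts present, then (after scaling down) absent.
func hasReplicaFailure(rc *corev1.ReplicationController) bool {
	for _, c := range rc.Status.Conditions {
		if c.Type == corev1.ReplicationControllerReplicaFailure && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}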
• [SLOW TEST:15.093 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":191,"skipped":2940,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:00:43.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 5 22:00:52.520: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 5 22:00:57.565: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:00:59.663: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:01.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:03.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:05.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:07.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:09.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:11.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:13.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:15.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:17.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:19.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:21.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:24.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:25.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:27.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:32.388: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:33.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:36.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:01:37.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312852, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312853, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724312851, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 22:01:40.623: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:01:40.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:01:41.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4313" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:58.664 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":192,"skipped":2949,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:01:41.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-fd0ca18a-f542-4bd8-ae2c-4bfd966fb67d [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:01:42.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6899" for this suite. 
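Unlike the webhook tests, the empty-key rejection above is plain apiserver validation; no webhook is involved, so the create fails immediately. A sketch (client-go v0.18+ signatures assumed; names illustrative):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// expectEmptyKeyRejected submits a Secret whose only data key is the empty
// string; validation must refuse it, so a nil error is the failure case.
func expectEmptyKeyRejected(ctx context.Context, client kubernetes.Interface, ns string) error {
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "secret-emptykey-"},
		Data:       map[string][]byte{"": []byte("value-1")},
	}
	if _, err := client.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err == nil {
		return fmt.Errorf("expected validation to reject a secret with an empty key")
	}
	return nil
}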
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":193,"skipped":2976,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:01:42.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 22:01:42.084: INFO: Waiting up to 5m0s for pod "downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443" in namespace "downward-api-5704" to be "success or failure" May 5 22:01:42.102: INFO: Pod "downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443": Phase="Pending", Reason="", readiness=false. Elapsed: 17.461717ms May 5 22:01:45.382: INFO: Pod "downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443": Phase="Pending", Reason="", readiness=false. Elapsed: 3.297836296s May 5 22:01:49.345: INFO: Pod "downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443": Phase="Pending", Reason="", readiness=false. Elapsed: 7.26094745s May 5 22:01:51.380: INFO: Pod "downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443": Phase="Pending", Reason="", readiness=false. Elapsed: 9.296212592s May 5 22:01:53.386: INFO: Pod "downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443": Phase="Pending", Reason="", readiness=false. Elapsed: 11.3016876s May 5 22:01:55.389: INFO: Pod "downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.304831934s STEP: Saw pod success May 5 22:01:55.389: INFO: Pod "downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443" satisfied condition "success or failure" May 5 22:01:55.391: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443 container client-container: STEP: delete the pod May 5 22:01:56.291: INFO: Waiting for pod downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443 to disappear May 5 22:01:56.302: INFO: Pod downwardapi-volume-15fde664-6fe7-43e5-96ac-6dbc41dd1443 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:01:56.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5704" for this suite. 
• [SLOW TEST:14.267 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":2991,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:01:56.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 5 22:02:24.527: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3202 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:02:24.527: INFO: >>> kubeConfig: /root/.kube/config I0505 22:02:24.573689 7 log.go:172] (0xc002a248f0) (0xc0014ace60) Create stream I0505 22:02:24.573757 7 log.go:172] (0xc002a248f0) (0xc0014ace60) Stream added, broadcasting: 1 I0505 22:02:24.577313 7 log.go:172] (0xc002a248f0) Reply frame received for 1 I0505 22:02:24.577401 7 log.go:172] (0xc002a248f0) (0xc001a0c5a0) Create stream I0505 22:02:24.577443 7 log.go:172] (0xc002a248f0) (0xc001a0c5a0) Stream added, broadcasting: 3 I0505 22:02:24.579597 7 log.go:172] (0xc002a248f0) Reply frame received for 3 I0505 22:02:24.579638 7 log.go:172] (0xc002a248f0) (0xc002a04000) Create stream I0505 22:02:24.579653 7 log.go:172] (0xc002a248f0) (0xc002a04000) Stream added, broadcasting: 5 I0505 22:02:24.580757 7 log.go:172] (0xc002a248f0) Reply frame received for 5 I0505 22:02:24.653486 7 log.go:172] (0xc002a248f0) Data frame received for 3 I0505 22:02:24.653508 7 log.go:172] (0xc001a0c5a0) (3) Data frame handling I0505 22:02:24.653523 7 log.go:172] (0xc001a0c5a0) (3) Data frame sent I0505 22:02:24.662586 7 log.go:172] (0xc002a248f0) Data frame received for 5 I0505 22:02:24.662614 7 log.go:172] (0xc002a04000) (5) Data frame handling I0505 22:02:24.662647 7 log.go:172] (0xc002a248f0) Data frame received for 3 I0505 22:02:24.662664 7 log.go:172] (0xc001a0c5a0) (3) Data frame handling I0505 22:02:24.663890 7 log.go:172] (0xc002a248f0) Data frame received for 1 I0505 22:02:24.663909 7 log.go:172] (0xc0014ace60) (1) Data frame handling I0505 22:02:24.663930 7 log.go:172] (0xc0014ace60) (1) Data frame sent I0505 22:02:24.664064 7 log.go:172] (0xc002a248f0) (0xc0014ace60) Stream removed, broadcasting: 1 I0505 22:02:24.664091 7 log.go:172] (0xc002a248f0) Go away received I0505 
22:02:24.664185 7 log.go:172] (0xc002a248f0) (0xc0014ace60) Stream removed, broadcasting: 1 I0505 22:02:24.664206 7 log.go:172] (0xc002a248f0) (0xc001a0c5a0) Stream removed, broadcasting: 3 I0505 22:02:24.664216 7 log.go:172] (0xc002a248f0) (0xc002a04000) Stream removed, broadcasting: 5 May 5 22:02:24.664: INFO: Exec stderr: "" May 5 22:02:24.664: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3202 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:02:24.664: INFO: >>> kubeConfig: /root/.kube/config I0505 22:02:24.690339 7 log.go:172] (0xc002081600) (0xc000cfa6e0) Create stream I0505 22:02:24.690358 7 log.go:172] (0xc002081600) (0xc000cfa6e0) Stream added, broadcasting: 1 I0505 22:02:24.692422 7 log.go:172] (0xc002081600) Reply frame received for 1 I0505 22:02:24.692459 7 log.go:172] (0xc002081600) (0xc0014acf00) Create stream I0505 22:02:24.692471 7 log.go:172] (0xc002081600) (0xc0014acf00) Stream added, broadcasting: 3 I0505 22:02:24.693606 7 log.go:172] (0xc002081600) Reply frame received for 3 I0505 22:02:24.693648 7 log.go:172] (0xc002081600) (0xc0014acfa0) Create stream I0505 22:02:24.693663 7 log.go:172] (0xc002081600) (0xc0014acfa0) Stream added, broadcasting: 5 I0505 22:02:24.694422 7 log.go:172] (0xc002081600) Reply frame received for 5 I0505 22:02:24.748726 7 log.go:172] (0xc002081600) Data frame received for 5 I0505 22:02:24.748748 7 log.go:172] (0xc0014acfa0) (5) Data frame handling I0505 22:02:24.748782 7 log.go:172] (0xc002081600) Data frame received for 3 I0505 22:02:24.748803 7 log.go:172] (0xc0014acf00) (3) Data frame handling I0505 22:02:24.748833 7 log.go:172] (0xc0014acf00) (3) Data frame sent I0505 22:02:24.748843 7 log.go:172] (0xc002081600) Data frame received for 3 I0505 22:02:24.748850 7 log.go:172] (0xc0014acf00) (3) Data frame handling I0505 22:02:24.749466 7 log.go:172] (0xc002081600) Data frame received for 1 I0505 22:02:24.749505 7 log.go:172] (0xc000cfa6e0) (1) Data frame handling I0505 22:02:24.749521 7 log.go:172] (0xc000cfa6e0) (1) Data frame sent I0505 22:02:24.749591 7 log.go:172] (0xc002081600) (0xc000cfa6e0) Stream removed, broadcasting: 1 I0505 22:02:24.749653 7 log.go:172] (0xc002081600) (0xc000cfa6e0) Stream removed, broadcasting: 1 I0505 22:02:24.749661 7 log.go:172] (0xc002081600) (0xc0014acf00) Stream removed, broadcasting: 3 I0505 22:02:24.749686 7 log.go:172] (0xc002081600) Go away received I0505 22:02:24.749727 7 log.go:172] (0xc002081600) (0xc0014acfa0) Stream removed, broadcasting: 5 May 5 22:02:24.749: INFO: Exec stderr: "" May 5 22:02:24.749: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3202 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:02:24.749: INFO: >>> kubeConfig: /root/.kube/config I0505 22:02:24.776803 7 log.go:172] (0xc002a25080) (0xc0014ad360) Create stream I0505 22:02:24.776832 7 log.go:172] (0xc002a25080) (0xc0014ad360) Stream added, broadcasting: 1 I0505 22:02:24.778605 7 log.go:172] (0xc002a25080) Reply frame received for 1 I0505 22:02:24.778639 7 log.go:172] (0xc002a25080) (0xc0014ad4a0) Create stream I0505 22:02:24.778648 7 log.go:172] (0xc002a25080) (0xc0014ad4a0) Stream added, broadcasting: 3 I0505 22:02:24.779295 7 log.go:172] (0xc002a25080) Reply frame received for 3 I0505 22:02:24.779329 7 log.go:172] (0xc002a25080) (0xc0014ad680) Create stream I0505 22:02:24.779342 7 log.go:172] (0xc002a25080) 
(0xc0014ad680) Stream added, broadcasting: 5 I0505 22:02:24.780371 7 log.go:172] (0xc002a25080) Reply frame received for 5 I0505 22:02:24.878407 7 log.go:172] (0xc002a25080) Data frame received for 3 I0505 22:02:24.878444 7 log.go:172] (0xc002a25080) Data frame received for 5 I0505 22:02:24.878480 7 log.go:172] (0xc0014ad680) (5) Data frame handling I0505 22:02:24.878510 7 log.go:172] (0xc0014ad4a0) (3) Data frame handling I0505 22:02:24.878537 7 log.go:172] (0xc0014ad4a0) (3) Data frame sent I0505 22:02:24.878548 7 log.go:172] (0xc002a25080) Data frame received for 3 I0505 22:02:24.878560 7 log.go:172] (0xc0014ad4a0) (3) Data frame handling I0505 22:02:24.879355 7 log.go:172] (0xc002a25080) Data frame received for 1 I0505 22:02:24.879383 7 log.go:172] (0xc0014ad360) (1) Data frame handling I0505 22:02:24.879408 7 log.go:172] (0xc0014ad360) (1) Data frame sent I0505 22:02:24.879515 7 log.go:172] (0xc002a25080) (0xc0014ad360) Stream removed, broadcasting: 1 I0505 22:02:24.879553 7 log.go:172] (0xc002a25080) Go away received I0505 22:02:24.879628 7 log.go:172] (0xc002a25080) (0xc0014ad360) Stream removed, broadcasting: 1 I0505 22:02:24.879645 7 log.go:172] (0xc002a25080) (0xc0014ad4a0) Stream removed, broadcasting: 3 I0505 22:02:24.879657 7 log.go:172] (0xc002a25080) (0xc0014ad680) Stream removed, broadcasting: 5 May 5 22:02:24.879: INFO: Exec stderr: "" May 5 22:02:24.879: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3202 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:02:24.879: INFO: >>> kubeConfig: /root/.kube/config I0505 22:02:24.905855 7 log.go:172] (0xc002a256b0) (0xc0014adae0) Create stream I0505 22:02:24.905892 7 log.go:172] (0xc002a256b0) (0xc0014adae0) Stream added, broadcasting: 1 I0505 22:02:24.907618 7 log.go:172] (0xc002a256b0) Reply frame received for 1 I0505 22:02:24.907657 7 log.go:172] (0xc002a256b0) (0xc0014adb80) Create stream I0505 22:02:24.907668 7 log.go:172] (0xc002a256b0) (0xc0014adb80) Stream added, broadcasting: 3 I0505 22:02:24.908582 7 log.go:172] (0xc002a256b0) Reply frame received for 3 I0505 22:02:24.908609 7 log.go:172] (0xc002a256b0) (0xc0014add60) Create stream I0505 22:02:24.908616 7 log.go:172] (0xc002a256b0) (0xc0014add60) Stream added, broadcasting: 5 I0505 22:02:24.909515 7 log.go:172] (0xc002a256b0) Reply frame received for 5 I0505 22:02:24.962241 7 log.go:172] (0xc002a256b0) Data frame received for 5 I0505 22:02:24.962273 7 log.go:172] (0xc0014add60) (5) Data frame handling I0505 22:02:24.962301 7 log.go:172] (0xc002a256b0) Data frame received for 3 I0505 22:02:24.962332 7 log.go:172] (0xc0014adb80) (3) Data frame handling I0505 22:02:24.962356 7 log.go:172] (0xc0014adb80) (3) Data frame sent I0505 22:02:24.962377 7 log.go:172] (0xc002a256b0) Data frame received for 3 I0505 22:02:24.962398 7 log.go:172] (0xc0014adb80) (3) Data frame handling I0505 22:02:24.963786 7 log.go:172] (0xc002a256b0) Data frame received for 1 I0505 22:02:24.963804 7 log.go:172] (0xc0014adae0) (1) Data frame handling I0505 22:02:24.963836 7 log.go:172] (0xc0014adae0) (1) Data frame sent I0505 22:02:24.963926 7 log.go:172] (0xc002a256b0) (0xc0014adae0) Stream removed, broadcasting: 1 I0505 22:02:24.963989 7 log.go:172] (0xc002a256b0) Go away received I0505 22:02:24.964032 7 log.go:172] (0xc002a256b0) (0xc0014adae0) Stream removed, broadcasting: 1 I0505 22:02:24.964050 7 log.go:172] (0xc002a256b0) (0xc0014adb80) Stream removed, broadcasting: 3 I0505 
22:02:24.964061 7 log.go:172] (0xc002a256b0) (0xc0014add60) Stream removed, broadcasting: 5 May 5 22:02:24.964: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 5 22:02:24.964: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3202 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:02:24.964: INFO: >>> kubeConfig: /root/.kube/config I0505 22:02:24.994203 7 log.go:172] (0xc002a25ce0) (0xc0013dc280) Create stream I0505 22:02:24.994238 7 log.go:172] (0xc002a25ce0) (0xc0013dc280) Stream added, broadcasting: 1 I0505 22:02:24.995820 7 log.go:172] (0xc002a25ce0) Reply frame received for 1 I0505 22:02:24.995854 7 log.go:172] (0xc002a25ce0) (0xc0016d3680) Create stream I0505 22:02:24.995867 7 log.go:172] (0xc002a25ce0) (0xc0016d3680) Stream added, broadcasting: 3 I0505 22:02:24.996704 7 log.go:172] (0xc002a25ce0) Reply frame received for 3 I0505 22:02:24.996738 7 log.go:172] (0xc002a25ce0) (0xc0013dc640) Create stream I0505 22:02:24.996752 7 log.go:172] (0xc002a25ce0) (0xc0013dc640) Stream added, broadcasting: 5 I0505 22:02:24.997792 7 log.go:172] (0xc002a25ce0) Reply frame received for 5 I0505 22:02:25.053199 7 log.go:172] (0xc002a25ce0) Data frame received for 5 I0505 22:02:25.053250 7 log.go:172] (0xc0013dc640) (5) Data frame handling I0505 22:02:25.053284 7 log.go:172] (0xc002a25ce0) Data frame received for 3 I0505 22:02:25.053314 7 log.go:172] (0xc0016d3680) (3) Data frame handling I0505 22:02:25.053333 7 log.go:172] (0xc0016d3680) (3) Data frame sent I0505 22:02:25.053444 7 log.go:172] (0xc002a25ce0) Data frame received for 3 I0505 22:02:25.053478 7 log.go:172] (0xc0016d3680) (3) Data frame handling I0505 22:02:25.054526 7 log.go:172] (0xc002a25ce0) Data frame received for 1 I0505 22:02:25.054595 7 log.go:172] (0xc0013dc280) (1) Data frame handling I0505 22:02:25.054645 7 log.go:172] (0xc0013dc280) (1) Data frame sent I0505 22:02:25.054681 7 log.go:172] (0xc002a25ce0) (0xc0013dc280) Stream removed, broadcasting: 1 I0505 22:02:25.054710 7 log.go:172] (0xc002a25ce0) Go away received I0505 22:02:25.054818 7 log.go:172] (0xc002a25ce0) (0xc0013dc280) Stream removed, broadcasting: 1 I0505 22:02:25.054840 7 log.go:172] (0xc002a25ce0) (0xc0016d3680) Stream removed, broadcasting: 3 I0505 22:02:25.054860 7 log.go:172] (0xc002a25ce0) (0xc0013dc640) Stream removed, broadcasting: 5 May 5 22:02:25.054: INFO: Exec stderr: "" May 5 22:02:25.054: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3202 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:02:25.054: INFO: >>> kubeConfig: /root/.kube/config I0505 22:02:25.086177 7 log.go:172] (0xc001810420) (0xc001a0d180) Create stream I0505 22:02:25.086203 7 log.go:172] (0xc001810420) (0xc001a0d180) Stream added, broadcasting: 1 I0505 22:02:25.088027 7 log.go:172] (0xc001810420) Reply frame received for 1 I0505 22:02:25.088065 7 log.go:172] (0xc001810420) (0xc0013dca00) Create stream I0505 22:02:25.088076 7 log.go:172] (0xc001810420) (0xc0013dca00) Stream added, broadcasting: 3 I0505 22:02:25.088840 7 log.go:172] (0xc001810420) Reply frame received for 3 I0505 22:02:25.088876 7 log.go:172] (0xc001810420) (0xc0016d3720) Create stream I0505 22:02:25.088890 7 log.go:172] (0xc001810420) (0xc0016d3720) Stream added, broadcasting: 5 I0505 22:02:25.089818 7 log.go:172] (0xc001810420) 
Reply frame received for 5 I0505 22:02:25.144328 7 log.go:172] (0xc001810420) Data frame received for 5 I0505 22:02:25.144361 7 log.go:172] (0xc0016d3720) (5) Data frame handling I0505 22:02:25.144385 7 log.go:172] (0xc001810420) Data frame received for 3 I0505 22:02:25.144401 7 log.go:172] (0xc0013dca00) (3) Data frame handling I0505 22:02:25.144413 7 log.go:172] (0xc0013dca00) (3) Data frame sent I0505 22:02:25.144424 7 log.go:172] (0xc001810420) Data frame received for 3 I0505 22:02:25.144435 7 log.go:172] (0xc0013dca00) (3) Data frame handling I0505 22:02:25.146331 7 log.go:172] (0xc001810420) Data frame received for 1 I0505 22:02:25.146352 7 log.go:172] (0xc001a0d180) (1) Data frame handling I0505 22:02:25.146368 7 log.go:172] (0xc001a0d180) (1) Data frame sent I0505 22:02:25.146378 7 log.go:172] (0xc001810420) (0xc001a0d180) Stream removed, broadcasting: 1 I0505 22:02:25.146396 7 log.go:172] (0xc001810420) Go away received I0505 22:02:25.146511 7 log.go:172] (0xc001810420) (0xc001a0d180) Stream removed, broadcasting: 1 I0505 22:02:25.146530 7 log.go:172] (0xc001810420) (0xc0013dca00) Stream removed, broadcasting: 3 I0505 22:02:25.146539 7 log.go:172] (0xc001810420) (0xc0016d3720) Stream removed, broadcasting: 5 May 5 22:02:25.146: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 5 22:02:25.146: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3202 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:02:25.146: INFO: >>> kubeConfig: /root/.kube/config I0505 22:02:25.175775 7 log.go:172] (0xc001810bb0) (0xc001a0dcc0) Create stream I0505 22:02:25.175801 7 log.go:172] (0xc001810bb0) (0xc001a0dcc0) Stream added, broadcasting: 1 I0505 22:02:25.177091 7 log.go:172] (0xc001810bb0) Reply frame received for 1 I0505 22:02:25.177452 7 log.go:172] (0xc001810bb0) (0xc000cfa960) Create stream I0505 22:02:25.177473 7 log.go:172] (0xc001810bb0) (0xc000cfa960) Stream added, broadcasting: 3 I0505 22:02:25.178430 7 log.go:172] (0xc001810bb0) Reply frame received for 3 I0505 22:02:25.178464 7 log.go:172] (0xc001810bb0) (0xc000cfabe0) Create stream I0505 22:02:25.178482 7 log.go:172] (0xc001810bb0) (0xc000cfabe0) Stream added, broadcasting: 5 I0505 22:02:25.179241 7 log.go:172] (0xc001810bb0) Reply frame received for 5 I0505 22:02:25.255665 7 log.go:172] (0xc001810bb0) Data frame received for 5 I0505 22:02:25.255691 7 log.go:172] (0xc000cfabe0) (5) Data frame handling I0505 22:02:25.255708 7 log.go:172] (0xc001810bb0) Data frame received for 3 I0505 22:02:25.255716 7 log.go:172] (0xc000cfa960) (3) Data frame handling I0505 22:02:25.255732 7 log.go:172] (0xc000cfa960) (3) Data frame sent I0505 22:02:25.255740 7 log.go:172] (0xc001810bb0) Data frame received for 3 I0505 22:02:25.255747 7 log.go:172] (0xc000cfa960) (3) Data frame handling I0505 22:02:25.256851 7 log.go:172] (0xc001810bb0) Data frame received for 1 I0505 22:02:25.256878 7 log.go:172] (0xc001a0dcc0) (1) Data frame handling I0505 22:02:25.256893 7 log.go:172] (0xc001a0dcc0) (1) Data frame sent I0505 22:02:25.256903 7 log.go:172] (0xc001810bb0) (0xc001a0dcc0) Stream removed, broadcasting: 1 I0505 22:02:25.256913 7 log.go:172] (0xc001810bb0) Go away received I0505 22:02:25.256995 7 log.go:172] (0xc001810bb0) (0xc001a0dcc0) Stream removed, broadcasting: 1 I0505 22:02:25.257024 7 log.go:172] (0xc001810bb0) (0xc000cfa960) Stream removed, broadcasting: 3 I0505 
22:02:25.257047 7 log.go:172] (0xc001810bb0) (0xc000cfabe0) Stream removed, broadcasting: 5 May 5 22:02:25.257: INFO: Exec stderr: "" May 5 22:02:25.257: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3202 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:02:25.257: INFO: >>> kubeConfig: /root/.kube/config I0505 22:02:25.282618 7 log.go:172] (0xc002081d90) (0xc00104e3c0) Create stream I0505 22:02:25.282670 7 log.go:172] (0xc002081d90) (0xc00104e3c0) Stream added, broadcasting: 1 I0505 22:02:25.283922 7 log.go:172] (0xc002081d90) Reply frame received for 1 I0505 22:02:25.283954 7 log.go:172] (0xc002081d90) (0xc0013dd0e0) Create stream I0505 22:02:25.283964 7 log.go:172] (0xc002081d90) (0xc0013dd0e0) Stream added, broadcasting: 3 I0505 22:02:25.284637 7 log.go:172] (0xc002081d90) Reply frame received for 3 I0505 22:02:25.284658 7 log.go:172] (0xc002081d90) (0xc001a0dd60) Create stream I0505 22:02:25.284667 7 log.go:172] (0xc002081d90) (0xc001a0dd60) Stream added, broadcasting: 5 I0505 22:02:25.285326 7 log.go:172] (0xc002081d90) Reply frame received for 5 I0505 22:02:25.369670 7 log.go:172] (0xc002081d90) Data frame received for 3 I0505 22:02:25.369747 7 log.go:172] (0xc0013dd0e0) (3) Data frame handling I0505 22:02:25.369784 7 log.go:172] (0xc0013dd0e0) (3) Data frame sent I0505 22:02:25.369818 7 log.go:172] (0xc002081d90) Data frame received for 3 I0505 22:02:25.369852 7 log.go:172] (0xc0013dd0e0) (3) Data frame handling I0505 22:02:25.369876 7 log.go:172] (0xc002081d90) Data frame received for 5 I0505 22:02:25.369885 7 log.go:172] (0xc001a0dd60) (5) Data frame handling I0505 22:02:25.370989 7 log.go:172] (0xc002081d90) Data frame received for 1 I0505 22:02:25.371012 7 log.go:172] (0xc00104e3c0) (1) Data frame handling I0505 22:02:25.371033 7 log.go:172] (0xc00104e3c0) (1) Data frame sent I0505 22:02:25.371047 7 log.go:172] (0xc002081d90) (0xc00104e3c0) Stream removed, broadcasting: 1 I0505 22:02:25.371095 7 log.go:172] (0xc002081d90) (0xc00104e3c0) Stream removed, broadcasting: 1 I0505 22:02:25.371106 7 log.go:172] (0xc002081d90) (0xc0013dd0e0) Stream removed, broadcasting: 3 I0505 22:02:25.371114 7 log.go:172] (0xc002081d90) (0xc001a0dd60) Stream removed, broadcasting: 5 May 5 22:02:25.371: INFO: Exec stderr: "" May 5 22:02:25.371: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3202 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:02:25.371: INFO: >>> kubeConfig: /root/.kube/config I0505 22:02:25.372457 7 log.go:172] (0xc002081d90) Go away received I0505 22:02:25.395550 7 log.go:172] (0xc001810f20) (0xc0014dc000) Create stream I0505 22:02:25.395573 7 log.go:172] (0xc001810f20) (0xc0014dc000) Stream added, broadcasting: 1 I0505 22:02:25.397350 7 log.go:172] (0xc001810f20) Reply frame received for 1 I0505 22:02:25.397371 7 log.go:172] (0xc001810f20) (0xc0014dc140) Create stream I0505 22:02:25.397378 7 log.go:172] (0xc001810f20) (0xc0014dc140) Stream added, broadcasting: 3 I0505 22:02:25.398122 7 log.go:172] (0xc001810f20) Reply frame received for 3 I0505 22:02:25.398137 7 log.go:172] (0xc001810f20) (0xc00104e960) Create stream I0505 22:02:25.398146 7 log.go:172] (0xc001810f20) (0xc00104e960) Stream added, broadcasting: 5 I0505 22:02:25.398839 7 log.go:172] (0xc001810f20) Reply frame received for 5 I0505 22:02:25.464287 7 log.go:172] (0xc001810f20) 
Data frame received for 5 I0505 22:02:25.464316 7 log.go:172] (0xc00104e960) (5) Data frame handling I0505 22:02:25.464344 7 log.go:172] (0xc001810f20) Data frame received for 3 I0505 22:02:25.464387 7 log.go:172] (0xc0014dc140) (3) Data frame handling I0505 22:02:25.464424 7 log.go:172] (0xc0014dc140) (3) Data frame sent I0505 22:02:25.464447 7 log.go:172] (0xc001810f20) Data frame received for 3 I0505 22:02:25.464465 7 log.go:172] (0xc0014dc140) (3) Data frame handling I0505 22:02:25.465936 7 log.go:172] (0xc001810f20) Data frame received for 1 I0505 22:02:25.465961 7 log.go:172] (0xc0014dc000) (1) Data frame handling I0505 22:02:25.465992 7 log.go:172] (0xc0014dc000) (1) Data frame sent I0505 22:02:25.466017 7 log.go:172] (0xc001810f20) (0xc0014dc000) Stream removed, broadcasting: 1 I0505 22:02:25.466036 7 log.go:172] (0xc001810f20) Go away received I0505 22:02:25.466115 7 log.go:172] (0xc001810f20) (0xc0014dc000) Stream removed, broadcasting: 1 I0505 22:02:25.466128 7 log.go:172] (0xc001810f20) (0xc0014dc140) Stream removed, broadcasting: 3 I0505 22:02:25.466137 7 log.go:172] (0xc001810f20) (0xc00104e960) Stream removed, broadcasting: 5 May 5 22:02:25.466: INFO: Exec stderr: "" May 5 22:02:25.466: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3202 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:02:25.466: INFO: >>> kubeConfig: /root/.kube/config I0505 22:02:25.496596 7 log.go:172] (0xc002c56420) (0xc0013dda40) Create stream I0505 22:02:25.496654 7 log.go:172] (0xc002c56420) (0xc0013dda40) Stream added, broadcasting: 1 I0505 22:02:25.498567 7 log.go:172] (0xc002c56420) Reply frame received for 1 I0505 22:02:25.498601 7 log.go:172] (0xc002c56420) (0xc002a04140) Create stream I0505 22:02:25.498614 7 log.go:172] (0xc002c56420) (0xc002a04140) Stream added, broadcasting: 3 I0505 22:02:25.499426 7 log.go:172] (0xc002c56420) Reply frame received for 3 I0505 22:02:25.499461 7 log.go:172] (0xc002c56420) (0xc002a04280) Create stream I0505 22:02:25.499472 7 log.go:172] (0xc002c56420) (0xc002a04280) Stream added, broadcasting: 5 I0505 22:02:25.500360 7 log.go:172] (0xc002c56420) Reply frame received for 5 I0505 22:02:25.572915 7 log.go:172] (0xc002c56420) Data frame received for 3 I0505 22:02:25.573002 7 log.go:172] (0xc002a04140) (3) Data frame handling I0505 22:02:25.573023 7 log.go:172] (0xc002a04140) (3) Data frame sent I0505 22:02:25.573042 7 log.go:172] (0xc002c56420) Data frame received for 3 I0505 22:02:25.573067 7 log.go:172] (0xc002a04140) (3) Data frame handling I0505 22:02:25.573107 7 log.go:172] (0xc002c56420) Data frame received for 5 I0505 22:02:25.573382 7 log.go:172] (0xc002a04280) (5) Data frame handling I0505 22:02:25.574193 7 log.go:172] (0xc002c56420) Data frame received for 1 I0505 22:02:25.574218 7 log.go:172] (0xc0013dda40) (1) Data frame handling I0505 22:02:25.574231 7 log.go:172] (0xc0013dda40) (1) Data frame sent I0505 22:02:25.574253 7 log.go:172] (0xc002c56420) (0xc0013dda40) Stream removed, broadcasting: 1 I0505 22:02:25.574283 7 log.go:172] (0xc002c56420) Go away received I0505 22:02:25.574316 7 log.go:172] (0xc002c56420) (0xc0013dda40) Stream removed, broadcasting: 1 I0505 22:02:25.574329 7 log.go:172] (0xc002c56420) (0xc002a04140) Stream removed, broadcasting: 3 I0505 22:02:25.574338 7 log.go:172] (0xc002c56420) (0xc002a04280) Stream removed, broadcasting: 5 May 5 22:02:25.574: INFO: Exec stderr: "" [AfterEach] [k8s.io] 
KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:02:25.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3202" for this suite. • [SLOW TEST:29.277 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3001,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:02:25.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0505 22:03:13.024457 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 5 22:03:13.024: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:03:13.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1687" for this suite. 
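
For reference, the behavior the garbage-collector test verifies above — deleting a replication controller while its pods keep running — comes down to one delete option. A minimal client-go sketch, assuming recent context-taking signatures (the v1.17-era client took a *DeleteOptions and no context); the namespace and RC name are illustrative, not the suite's generated ones:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from a kubeconfig, as the suite does.
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// DeletePropagationOrphan is the "delete options say so" part:
    	// the RC object is removed, but its pods stay running, now ownerless.
    	orphan := metav1.DeletePropagationOrphan
    	err = client.CoreV1().ReplicationControllers("default").Delete(
    		context.TODO(),
    		"my-rc", // illustrative name
    		metav1.DeleteOptions{PropagationPolicy: &orphan},
    	)
    	if err != nil {
    		panic(err)
    	}
    }

The 30-second wait in the log is the test confirming the garbage collector does NOT reap those orphaned pods.
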
• [SLOW TEST:47.638 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":196,"skipped":3065,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:03:13.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 5 22:03:15.971: INFO: Waiting up to 5m0s for pod "downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206" in namespace "downward-api-3758" to be "success or failure" May 5 22:03:15.974: INFO: Pod "downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206": Phase="Pending", Reason="", readiness=false. Elapsed: 2.682647ms May 5 22:03:17.978: INFO: Pod "downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00627612s May 5 22:03:20.957: INFO: Pod "downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206": Phase="Pending", Reason="", readiness=false. Elapsed: 4.985407508s May 5 22:03:22.964: INFO: Pod "downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206": Phase="Pending", Reason="", readiness=false. Elapsed: 6.993168474s May 5 22:03:25.074: INFO: Pod "downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206": Phase="Running", Reason="", readiness=true. Elapsed: 9.102599848s May 5 22:03:27.180: INFO: Pod "downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.208922776s STEP: Saw pod success May 5 22:03:27.180: INFO: Pod "downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206" satisfied condition "success or failure" May 5 22:03:27.242: INFO: Trying to get logs from node jerma-worker pod downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206 container dapi-container: STEP: delete the pod May 5 22:03:27.538: INFO: Waiting for pod downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206 to disappear May 5 22:03:27.558: INFO: Pod downward-api-c2e399bd-b3f7-4cac-9060-f71eaac7f206 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:03:27.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3758" for this suite. 
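
The downward-api pod that test creates boils down to three fieldRef-driven environment variables filled in by the kubelet from the pod's own fields. A minimal sketch with current k8s.io/api types (the pod name and env var names are illustrative):

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIPod prints its own name, namespace, and IP via env vars.
    func downwardAPIPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "dapi-container",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "env"},
    				Env: []corev1.EnvVar{
    					{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
    						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
    					{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
    						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
    					{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
    						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
    				},
    			}},
    		},
    	}
    }

    func main() { _ = downwardAPIPod() }
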
• [SLOW TEST:14.553 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3104,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:03:27.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-1695 STEP: creating replication controller nodeport-test in namespace services-1695 I0505 22:03:29.691093 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1695, replica count: 2 I0505 22:03:32.742570 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 22:03:35.742804 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 22:03:35.742: INFO: Creating new exec pod May 5 22:03:42.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1695 execpod85wt7 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 5 22:03:45.825: INFO: stderr: "I0505 22:03:45.648962 3204 log.go:172] (0xc0000d8370) (0xc000787400) Create stream\nI0505 22:03:45.649084 3204 log.go:172] (0xc0000d8370) (0xc000787400) Stream added, broadcasting: 1\nI0505 22:03:45.650790 3204 log.go:172] (0xc0000d8370) Reply frame received for 1\nI0505 22:03:45.650817 3204 log.go:172] (0xc0000d8370) (0xc000bb6000) Create stream\nI0505 22:03:45.650828 3204 log.go:172] (0xc0000d8370) (0xc000bb6000) Stream added, broadcasting: 3\nI0505 22:03:45.651594 3204 log.go:172] (0xc0000d8370) Reply frame received for 3\nI0505 22:03:45.651623 3204 log.go:172] (0xc0000d8370) (0xc00068f9a0) Create stream\nI0505 22:03:45.651632 3204 log.go:172] (0xc0000d8370) (0xc00068f9a0) Stream added, broadcasting: 5\nI0505 22:03:45.652297 3204 log.go:172] (0xc0000d8370) Reply frame received for 5\nI0505 22:03:45.771496 3204 log.go:172] (0xc0000d8370) Data frame received for 5\nI0505 22:03:45.771523 3204 log.go:172] (0xc00068f9a0) (5) Data frame handling\nI0505 22:03:45.771537 3204 log.go:172] (0xc00068f9a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0505 22:03:45.818602 3204 log.go:172] (0xc0000d8370) Data frame received for 5\nI0505 22:03:45.818679 3204 
log.go:172] (0xc00068f9a0) (5) Data frame handling\nI0505 22:03:45.818707 3204 log.go:172] (0xc00068f9a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0505 22:03:45.819899 3204 log.go:172] (0xc0000d8370) Data frame received for 3\nI0505 22:03:45.819958 3204 log.go:172] (0xc000bb6000) (3) Data frame handling\nI0505 22:03:45.819993 3204 log.go:172] (0xc0000d8370) Data frame received for 5\nI0505 22:03:45.820034 3204 log.go:172] (0xc00068f9a0) (5) Data frame handling\nI0505 22:03:45.820875 3204 log.go:172] (0xc0000d8370) Data frame received for 1\nI0505 22:03:45.820911 3204 log.go:172] (0xc000787400) (1) Data frame handling\nI0505 22:03:45.820946 3204 log.go:172] (0xc000787400) (1) Data frame sent\nI0505 22:03:45.820989 3204 log.go:172] (0xc0000d8370) (0xc000787400) Stream removed, broadcasting: 1\nI0505 22:03:45.821022 3204 log.go:172] (0xc0000d8370) Go away received\nI0505 22:03:45.821258 3204 log.go:172] (0xc0000d8370) (0xc000787400) Stream removed, broadcasting: 1\nI0505 22:03:45.821271 3204 log.go:172] (0xc0000d8370) (0xc000bb6000) Stream removed, broadcasting: 3\nI0505 22:03:45.821280 3204 log.go:172] (0xc0000d8370) (0xc00068f9a0) Stream removed, broadcasting: 5\n" May 5 22:03:45.825: INFO: stdout: "" May 5 22:03:45.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1695 execpod85wt7 -- /bin/sh -x -c nc -zv -t -w 2 10.97.67.237 80' May 5 22:03:46.146: INFO: stderr: "I0505 22:03:45.989475 3226 log.go:172] (0xc000b29340) (0xc0006d85a0) Create stream\nI0505 22:03:45.989526 3226 log.go:172] (0xc000b29340) (0xc0006d85a0) Stream added, broadcasting: 1\nI0505 22:03:45.997518 3226 log.go:172] (0xc000b29340) Reply frame received for 1\nI0505 22:03:45.997563 3226 log.go:172] (0xc000b29340) (0xc000699ea0) Create stream\nI0505 22:03:45.997573 3226 log.go:172] (0xc000b29340) (0xc000699ea0) Stream added, broadcasting: 3\nI0505 22:03:46.000291 3226 log.go:172] (0xc000b29340) Reply frame received for 3\nI0505 22:03:46.000356 3226 log.go:172] (0xc000b29340) (0xc000449cc0) Create stream\nI0505 22:03:46.000389 3226 log.go:172] (0xc000b29340) (0xc000449cc0) Stream added, broadcasting: 5\nI0505 22:03:46.005825 3226 log.go:172] (0xc000b29340) Reply frame received for 5\nI0505 22:03:46.129177 3226 log.go:172] (0xc000b29340) Data frame received for 5\nI0505 22:03:46.129342 3226 log.go:172] (0xc000449cc0) (5) Data frame handling\nI0505 22:03:46.129386 3226 log.go:172] (0xc000449cc0) (5) Data frame sent\n+ nc -zv -t -w 2 10.97.67.237 80\nI0505 22:03:46.133955 3226 log.go:172] (0xc000b29340) Data frame received for 5\nI0505 22:03:46.133978 3226 log.go:172] (0xc000449cc0) (5) Data frame handling\nI0505 22:03:46.133988 3226 log.go:172] (0xc000449cc0) (5) Data frame sent\nConnection to 10.97.67.237 80 port [tcp/http] succeeded!\nI0505 22:03:46.138328 3226 log.go:172] (0xc000b29340) Data frame received for 5\nI0505 22:03:46.138399 3226 log.go:172] (0xc000449cc0) (5) Data frame handling\nI0505 22:03:46.138441 3226 log.go:172] (0xc000b29340) Data frame received for 3\nI0505 22:03:46.138476 3226 log.go:172] (0xc000699ea0) (3) Data frame handling\nI0505 22:03:46.139319 3226 log.go:172] (0xc000b29340) Data frame received for 1\nI0505 22:03:46.139330 3226 log.go:172] (0xc0006d85a0) (1) Data frame handling\nI0505 22:03:46.139337 3226 log.go:172] (0xc0006d85a0) (1) Data frame sent\nI0505 22:03:46.139345 3226 log.go:172] (0xc000b29340) (0xc0006d85a0) Stream removed, broadcasting: 1\nI0505 22:03:46.139745 3226 log.go:172] (0xc000b29340) 
(0xc0006d85a0) Stream removed, broadcasting: 1\nI0505 22:03:46.139758 3226 log.go:172] (0xc000b29340) (0xc000699ea0) Stream removed, broadcasting: 3\nI0505 22:03:46.139970 3226 log.go:172] (0xc000b29340) Go away received\nI0505 22:03:46.140258 3226 log.go:172] (0xc000b29340) (0xc000449cc0) Stream removed, broadcasting: 5\n" May 5 22:03:46.146: INFO: stdout: "" May 5 22:03:46.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1695 execpod85wt7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31458' May 5 22:03:46.488: INFO: stderr: "I0505 22:03:46.341779 3239 log.go:172] (0xc00086f550) (0xc000a2a3c0) Create stream\nI0505 22:03:46.341886 3239 log.go:172] (0xc00086f550) (0xc000a2a3c0) Stream added, broadcasting: 1\nI0505 22:03:46.344123 3239 log.go:172] (0xc00086f550) Reply frame received for 1\nI0505 22:03:46.344155 3239 log.go:172] (0xc00086f550) (0xc000862460) Create stream\nI0505 22:03:46.344165 3239 log.go:172] (0xc00086f550) (0xc000862460) Stream added, broadcasting: 3\nI0505 22:03:46.345045 3239 log.go:172] (0xc00086f550) Reply frame received for 3\nI0505 22:03:46.345061 3239 log.go:172] (0xc00086f550) (0xc000862500) Create stream\nI0505 22:03:46.345069 3239 log.go:172] (0xc00086f550) (0xc000862500) Stream added, broadcasting: 5\nI0505 22:03:46.346298 3239 log.go:172] (0xc00086f550) Reply frame received for 5\nI0505 22:03:46.483932 3239 log.go:172] (0xc00086f550) Data frame received for 5\nI0505 22:03:46.483960 3239 log.go:172] (0xc000862500) (5) Data frame handling\nI0505 22:03:46.483970 3239 log.go:172] (0xc000862500) (5) Data frame sent\nI0505 22:03:46.483977 3239 log.go:172] (0xc00086f550) Data frame received for 5\nI0505 22:03:46.483983 3239 log.go:172] (0xc000862500) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31458\nConnection to 172.17.0.10 31458 port [tcp/31458] succeeded!\nI0505 22:03:46.483998 3239 log.go:172] (0xc000862500) (5) Data frame sent\nI0505 22:03:46.484004 3239 log.go:172] (0xc00086f550) Data frame received for 5\nI0505 22:03:46.484017 3239 log.go:172] (0xc000862500) (5) Data frame handling\nI0505 22:03:46.484053 3239 log.go:172] (0xc00086f550) Data frame received for 3\nI0505 22:03:46.484070 3239 log.go:172] (0xc000862460) (3) Data frame handling\nI0505 22:03:46.484403 3239 log.go:172] (0xc00086f550) Data frame received for 1\nI0505 22:03:46.484456 3239 log.go:172] (0xc000a2a3c0) (1) Data frame handling\nI0505 22:03:46.484493 3239 log.go:172] (0xc000a2a3c0) (1) Data frame sent\nI0505 22:03:46.484867 3239 log.go:172] (0xc00086f550) (0xc000a2a3c0) Stream removed, broadcasting: 1\nI0505 22:03:46.485140 3239 log.go:172] (0xc00086f550) (0xc000a2a3c0) Stream removed, broadcasting: 1\nI0505 22:03:46.485154 3239 log.go:172] (0xc00086f550) (0xc000862460) Stream removed, broadcasting: 3\nI0505 22:03:46.485395 3239 log.go:172] (0xc00086f550) Go away received\nI0505 22:03:46.485713 3239 log.go:172] (0xc00086f550) (0xc000862500) Stream removed, broadcasting: 5\n" May 5 22:03:46.488: INFO: stdout: "" May 5 22:03:46.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1695 execpod85wt7 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31458' May 5 22:03:46.834: INFO: stderr: "I0505 22:03:46.664825 3250 log.go:172] (0xc00047a210) (0xc000441cc0) Create stream\nI0505 22:03:46.664875 3250 log.go:172] (0xc00047a210) (0xc000441cc0) Stream added, broadcasting: 1\nI0505 22:03:46.668957 3250 log.go:172] (0xc00047a210) Reply frame received for 1\nI0505 22:03:46.668993 3250 log.go:172] 
(0xc00047a210) (0xc000844000) Create stream\nI0505 22:03:46.669003 3250 log.go:172] (0xc00047a210) (0xc000844000) Stream added, broadcasting: 3\nI0505 22:03:46.669896 3250 log.go:172] (0xc00047a210) Reply frame received for 3\nI0505 22:03:46.669928 3250 log.go:172] (0xc00047a210) (0xc0008440a0) Create stream\nI0505 22:03:46.669937 3250 log.go:172] (0xc00047a210) (0xc0008440a0) Stream added, broadcasting: 5\nI0505 22:03:46.671622 3250 log.go:172] (0xc00047a210) Reply frame received for 5\nI0505 22:03:46.802111 3250 log.go:172] (0xc00047a210) Data frame received for 5\nI0505 22:03:46.802135 3250 log.go:172] (0xc0008440a0) (5) Data frame handling\nI0505 22:03:46.802143 3250 log.go:172] (0xc0008440a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 31458\nI0505 22:03:46.828580 3250 log.go:172] (0xc00047a210) Data frame received for 5\nI0505 22:03:46.828604 3250 log.go:172] (0xc0008440a0) (5) Data frame handling\nI0505 22:03:46.828615 3250 log.go:172] (0xc0008440a0) (5) Data frame sent\nConnection to 172.17.0.8 31458 port [tcp/31458] succeeded!\nI0505 22:03:46.828858 3250 log.go:172] (0xc00047a210) Data frame received for 5\nI0505 22:03:46.828901 3250 log.go:172] (0xc0008440a0) (5) Data frame handling\nI0505 22:03:46.828961 3250 log.go:172] (0xc00047a210) Data frame received for 3\nI0505 22:03:46.828994 3250 log.go:172] (0xc000844000) (3) Data frame handling\nI0505 22:03:46.830842 3250 log.go:172] (0xc00047a210) Data frame received for 1\nI0505 22:03:46.831234 3250 log.go:172] (0xc000441cc0) (1) Data frame handling\nI0505 22:03:46.831278 3250 log.go:172] (0xc000441cc0) (1) Data frame sent\nI0505 22:03:46.831321 3250 log.go:172] (0xc00047a210) (0xc000441cc0) Stream removed, broadcasting: 1\nI0505 22:03:46.831543 3250 log.go:172] (0xc00047a210) Go away received\nI0505 22:03:46.831672 3250 log.go:172] (0xc00047a210) (0xc000441cc0) Stream removed, broadcasting: 1\nI0505 22:03:46.831713 3250 log.go:172] (0xc00047a210) (0xc000844000) Stream removed, broadcasting: 3\nI0505 22:03:46.831745 3250 log.go:172] (0xc00047a210) (0xc0008440a0) Stream removed, broadcasting: 5\n" May 5 22:03:46.834: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:03:46.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1695" for this suite. 
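
The service that test drives with nc has roughly this shape; type=NodePort is what allocates the node-level port (31458 in the run above, chosen automatically) in addition to the cluster IP (10.97.67.237 above). A sketch with current k8s.io/api types; the selector is an assumption about how the nodeport-test RC labels its pods:

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // nodePortService exposes matching pods on port 80 of the cluster IP
    // and on an auto-allocated port of every node.
    func nodePortService() *corev1.Service {
    	return &corev1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-test"},
    		Spec: corev1.ServiceSpec{
    			Type:     corev1.ServiceTypeNodePort,
    			Selector: map[string]string{"name": "nodeport-test"}, // illustrative
    			Ports: []corev1.ServicePort{{
    				Port:       80,
    				TargetPort: intstr.FromInt(80),
    				Protocol:   corev1.ProtocolTCP,
    			}},
    		},
    	}
    }

    func main() { _ = nodePortService() }

The four nc probes in the log then check all reachable fronts: the service DNS name, the cluster IP, and each node IP on the allocated node port.
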
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:19.068 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":198,"skipped":3114,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:03:46.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 22:03:46.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6" in namespace "downward-api-3504" to be "success or failure" May 5 22:03:46.912: INFO: Pod "downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.625082ms May 5 22:03:48.917: INFO: Pod "downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007231498s May 5 22:03:50.921: INFO: Pod "downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010637033s May 5 22:03:54.015: INFO: Pod "downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6": Phase="Pending", Reason="", readiness=false. Elapsed: 7.104833919s May 5 22:03:56.034: INFO: Pod "downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6": Phase="Pending", Reason="", readiness=false. Elapsed: 9.123985663s May 5 22:03:58.656: INFO: Pod "downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.746010628s May 5 22:04:00.659: INFO: Pod "downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 13.749006813s STEP: Saw pod success May 5 22:04:00.659: INFO: Pod "downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6" satisfied condition "success or failure" May 5 22:04:00.662: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6 container client-container: STEP: delete the pod May 5 22:04:00.986: INFO: Waiting for pod downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6 to disappear May 5 22:04:01.037: INFO: Pod downwardapi-volume-0d741220-15db-49d5-89a8-7e9bd34a15b6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:04:01.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3504" for this suite. • [SLOW TEST:14.385 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3125,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:04:01.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:04:01.476: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:04:03.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6666" for this suite. 
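
Creating and deleting a CustomResourceDefinition, as that test does, goes through the apiextensions clientset rather than the core one. A minimal sketch, assuming recent context-taking signatures; the group and kind names are invented for illustration:

    package main

    import (
    	"context"

    	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    	"k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := clientset.NewForConfigOrDie(config)

    	// A minimal namespaced CRD with one served+storage version.
    	crd := &apiextensionsv1.CustomResourceDefinition{
    		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
    		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
    			Group: "example.com",
    			Scope: apiextensionsv1.NamespaceScoped,
    			Names: apiextensionsv1.CustomResourceDefinitionNames{
    				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
    			},
    			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
    				Name: "v1", Served: true, Storage: true,
    				Schema: &apiextensionsv1.CustomResourceValidation{
    					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
    				},
    			}},
    		},
    	}

    	api := client.ApiextensionsV1().CustomResourceDefinitions()
    	if _, err := api.Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    	if err := api.Delete(context.TODO(), crd.Name, metav1.DeleteOptions{}); err != nil {
    		panic(err)
    	}
    }
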
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":200,"skipped":3128,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:04:03.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3047.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3047.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 22:04:23.770: INFO: DNS probes using dns-3047/dns-test-47ac3dc4-d5e9-4093-bd39-de62c75d3e49 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:04:23.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3047" for this suite. 
• [SLOW TEST:20.247 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":201,"skipped":3145,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:04:23.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-9880 STEP: creating a selector STEP: Creating the service pods in kubernetes May 5 22:04:24.672: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 5 22:04:57.942: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.19:8080/dial?request=hostname&protocol=udp&host=10.244.1.18&port=8081&tries=1'] Namespace:pod-network-test-9880 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:04:57.942: INFO: >>> kubeConfig: /root/.kube/config I0505 22:04:57.979033 7 log.go:172] (0xc002a24840) (0xc001f0f680) Create stream I0505 22:04:57.979081 7 log.go:172] (0xc002a24840) (0xc001f0f680) Stream added, broadcasting: 1 I0505 22:04:57.980796 7 log.go:172] (0xc002a24840) Reply frame received for 1 I0505 22:04:57.980843 7 log.go:172] (0xc002a24840) (0xc00104e3c0) Create stream I0505 22:04:57.980856 7 log.go:172] (0xc002a24840) (0xc00104e3c0) Stream added, broadcasting: 3 I0505 22:04:57.981958 7 log.go:172] (0xc002a24840) Reply frame received for 3 I0505 22:04:57.982005 7 log.go:172] (0xc002a24840) (0xc001f0f7c0) Create stream I0505 22:04:57.982022 7 log.go:172] (0xc002a24840) (0xc001f0f7c0) Stream added, broadcasting: 5 I0505 22:04:57.982880 7 log.go:172] (0xc002a24840) Reply frame received for 5 I0505 22:04:58.115172 7 log.go:172] (0xc002a24840) Data frame received for 3 I0505 22:04:58.115221 7 log.go:172] (0xc00104e3c0) (3) Data frame handling I0505 22:04:58.115244 7 log.go:172] (0xc00104e3c0) (3) Data frame sent I0505 22:04:58.116140 7 log.go:172] (0xc002a24840) Data frame received for 3 I0505 22:04:58.116170 7 log.go:172] (0xc00104e3c0) (3) Data frame handling I0505 22:04:58.116443 7 log.go:172] (0xc002a24840) Data frame received for 5 I0505 22:04:58.116476 7 log.go:172] (0xc001f0f7c0) (5) Data frame handling I0505 22:04:58.118150 7 log.go:172] (0xc002a24840) Data frame received for 1 I0505 22:04:58.118173 7 log.go:172] (0xc001f0f680) (1) Data frame handling I0505 22:04:58.118191 7 log.go:172] (0xc001f0f680) (1) Data frame sent I0505 22:04:58.118314 7 log.go:172] (0xc002a24840) (0xc001f0f680) Stream removed, 
broadcasting: 1 I0505 22:04:58.118403 7 log.go:172] (0xc002a24840) Go away received I0505 22:04:58.118578 7 log.go:172] (0xc002a24840) (0xc001f0f680) Stream removed, broadcasting: 1 I0505 22:04:58.118618 7 log.go:172] (0xc002a24840) (0xc00104e3c0) Stream removed, broadcasting: 3 I0505 22:04:58.118638 7 log.go:172] (0xc002a24840) (0xc001f0f7c0) Stream removed, broadcasting: 5 May 5 22:04:58.118: INFO: Waiting for responses: map[] May 5 22:04:58.122: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.19:8080/dial?request=hostname&protocol=udp&host=10.244.2.194&port=8081&tries=1'] Namespace:pod-network-test-9880 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 5 22:04:58.122: INFO: >>> kubeConfig: /root/.kube/config I0505 22:04:58.156744 7 log.go:172] (0xc00159a370) (0xc0013dd860) Create stream I0505 22:04:58.156776 7 log.go:172] (0xc00159a370) (0xc0013dd860) Stream added, broadcasting: 1 I0505 22:04:58.158727 7 log.go:172] (0xc00159a370) Reply frame received for 1 I0505 22:04:58.158762 7 log.go:172] (0xc00159a370) (0xc00104e780) Create stream I0505 22:04:58.158775 7 log.go:172] (0xc00159a370) (0xc00104e780) Stream added, broadcasting: 3 I0505 22:04:58.159648 7 log.go:172] (0xc00159a370) Reply frame received for 3 I0505 22:04:58.159684 7 log.go:172] (0xc00159a370) (0xc001e4e640) Create stream I0505 22:04:58.159699 7 log.go:172] (0xc00159a370) (0xc001e4e640) Stream added, broadcasting: 5 I0505 22:04:58.160606 7 log.go:172] (0xc00159a370) Reply frame received for 5 I0505 22:04:58.215900 7 log.go:172] (0xc00159a370) Data frame received for 3 I0505 22:04:58.215925 7 log.go:172] (0xc00104e780) (3) Data frame handling I0505 22:04:58.215937 7 log.go:172] (0xc00104e780) (3) Data frame sent I0505 22:04:58.216406 7 log.go:172] (0xc00159a370) Data frame received for 3 I0505 22:04:58.216440 7 log.go:172] (0xc00104e780) (3) Data frame handling I0505 22:04:58.216465 7 log.go:172] (0xc00159a370) Data frame received for 5 I0505 22:04:58.216478 7 log.go:172] (0xc001e4e640) (5) Data frame handling I0505 22:04:58.218008 7 log.go:172] (0xc00159a370) Data frame received for 1 I0505 22:04:58.218034 7 log.go:172] (0xc0013dd860) (1) Data frame handling I0505 22:04:58.218058 7 log.go:172] (0xc0013dd860) (1) Data frame sent I0505 22:04:58.218082 7 log.go:172] (0xc00159a370) (0xc0013dd860) Stream removed, broadcasting: 1 I0505 22:04:58.218123 7 log.go:172] (0xc00159a370) Go away received I0505 22:04:58.218202 7 log.go:172] (0xc00159a370) (0xc0013dd860) Stream removed, broadcasting: 1 I0505 22:04:58.218226 7 log.go:172] (0xc00159a370) (0xc00104e780) Stream removed, broadcasting: 3 I0505 22:04:58.218244 7 log.go:172] (0xc00159a370) (0xc001e4e640) Stream removed, broadcasting: 5 May 5 22:04:58.218: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:04:58.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9880" for this suite. 
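
Each ExecWithOptions above curls the /dial endpoint of agnhost's netexec server in the host-test-container-pod, asking it to probe a peer pod's hostname over UDP. Rebuilding that request with the standard library — the IPs and port are taken from this run and are only meaningful inside that cluster, and the response shape shown is an example:

    package main

    import (
    	"fmt"
    	"io/ioutil"
    	"net/http"
    	"net/url"
    )

    func main() {
    	q := url.Values{}
    	q.Set("request", "hostname") // ask the target pod for its hostname
    	q.Set("protocol", "udp")     // probe over UDP
    	q.Set("host", "10.244.1.18") // target pod IP (from this run)
    	q.Set("port", "8081")
    	q.Set("tries", "1")

    	resp, err := http.Get("http://10.244.1.19:8080/dial?" + q.Encode())
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	body, _ := ioutil.ReadAll(resp.Body)
    	fmt.Println(string(body)) // e.g. {"responses":["<target pod hostname>"]}
    }
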
• [SLOW TEST:34.363 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3150,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:04:58.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-35a4a08c-64f0-4608-b334-837e36925579 STEP: Creating a pod to test consume configMaps May 5 22:04:58.355: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d50b96be-a969-4d76-bfde-004930700429" in namespace "projected-4399" to be "success or failure" May 5 22:04:58.363: INFO: Pod "pod-projected-configmaps-d50b96be-a969-4d76-bfde-004930700429": Phase="Pending", Reason="", readiness=false. Elapsed: 7.164211ms May 5 22:05:00.367: INFO: Pod "pod-projected-configmaps-d50b96be-a969-4d76-bfde-004930700429": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011088224s May 5 22:05:02.370: INFO: Pod "pod-projected-configmaps-d50b96be-a969-4d76-bfde-004930700429": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01489618s STEP: Saw pod success May 5 22:05:02.370: INFO: Pod "pod-projected-configmaps-d50b96be-a969-4d76-bfde-004930700429" satisfied condition "success or failure" May 5 22:05:02.373: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-d50b96be-a969-4d76-bfde-004930700429 container projected-configmap-volume-test: STEP: delete the pod May 5 22:05:02.424: INFO: Waiting for pod pod-projected-configmaps-d50b96be-a969-4d76-bfde-004930700429 to disappear May 5 22:05:02.751: INFO: Pod pod-projected-configmaps-d50b96be-a969-4d76-bfde-004930700429 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:05:02.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4399" for this suite. 
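
The projected-configmap pod under test consumes one ConfigMap through two projected volumes mounted at different paths in the same container. A minimal sketch with current k8s.io/api types (volume names, mount paths, and the command are illustrative):

    package main

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedPod mounts the same ConfigMap twice via projected volumes.
    func projectedPod(cmName string) *corev1.Pod {
    	projected := func() corev1.VolumeSource {
    		return corev1.VolumeSource{
    			Projected: &corev1.ProjectedVolumeSource{
    				Sources: []corev1.VolumeProjection{{
    					ConfigMap: &corev1.ConfigMapProjection{
    						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
    					},
    				}},
    			},
    		}
    	}
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "projected-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{
    				{Name: "cfg-a", VolumeSource: projected()},
    				{Name: "cfg-b", VolumeSource: projected()},
    			},
    			Containers: []corev1.Container{{
    				Name:    "projected-configmap-volume-test",
    				Image:   "busybox",
    				Command: []string{"sh", "-c", "cat /etc/cfg-a/* /etc/cfg-b/*"},
    				VolumeMounts: []corev1.VolumeMount{
    					{Name: "cfg-a", MountPath: "/etc/cfg-a"},
    					{Name: "cfg-b", MountPath: "/etc/cfg-b"},
    				},
    			}},
    		},
    	}
    }

    func main() { _ = projectedPod("my-configmap") }
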
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3171,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:05:02.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:05:03.172: INFO: Waiting up to 5m0s for pod "busybox-user-65534-96bb0e26-7a7a-460e-a25f-b1b2a412355f" in namespace "security-context-test-4775" to be "success or failure" May 5 22:05:03.190: INFO: Pod "busybox-user-65534-96bb0e26-7a7a-460e-a25f-b1b2a412355f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.986065ms May 5 22:05:05.242: INFO: Pod "busybox-user-65534-96bb0e26-7a7a-460e-a25f-b1b2a412355f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070395059s May 5 22:05:07.246: INFO: Pod "busybox-user-65534-96bb0e26-7a7a-460e-a25f-b1b2a412355f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073929258s May 5 22:05:09.249: INFO: Pod "busybox-user-65534-96bb0e26-7a7a-460e-a25f-b1b2a412355f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077625393s May 5 22:05:09.250: INFO: Pod "busybox-user-65534-96bb0e26-7a7a-460e-a25f-b1b2a412355f" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:05:09.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4775" for this suite. 
• [SLOW TEST:6.465 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3175,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:05:09.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1097 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1097 I0505 22:05:09.675421 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1097, replica count: 2 I0505 22:05:12.725935 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 22:05:15.726142 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 22:05:15.726: INFO: Creating new exec pod May 5 22:05:22.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1097 execpodgmxtr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 5 22:05:23.081: INFO: stderr: "I0505 22:05:22.950104 3266 log.go:172] (0xc000a8d340) (0xc0009f8640) Create stream\nI0505 22:05:22.950166 3266 log.go:172] (0xc000a8d340) (0xc0009f8640) Stream added, broadcasting: 1\nI0505 22:05:22.954400 3266 log.go:172] (0xc000a8d340) Reply frame received for 1\nI0505 22:05:22.954474 3266 log.go:172] (0xc000a8d340) (0xc0005c46e0) Create stream\nI0505 22:05:22.954501 3266 log.go:172] (0xc000a8d340) (0xc0005c46e0) Stream added, broadcasting: 3\nI0505 22:05:22.955501 3266 log.go:172] (0xc000a8d340) Reply frame received for 3\nI0505 22:05:22.955524 3266 log.go:172] (0xc000a8d340) (0xc0007854a0) Create stream\nI0505 22:05:22.955532 3266 log.go:172] (0xc000a8d340) (0xc0007854a0) Stream added, broadcasting: 5\nI0505 22:05:22.956372 3266 log.go:172] (0xc000a8d340) Reply frame received for 5\nI0505 22:05:23.038435 3266 log.go:172] (0xc000a8d340) Data frame received for 5\nI0505 
22:05:23.038470 3266 log.go:172] (0xc0007854a0) (5) Data frame handling\nI0505 22:05:23.038496 3266 log.go:172] (0xc0007854a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0505 22:05:23.073695 3266 log.go:172] (0xc000a8d340) Data frame received for 5\nI0505 22:05:23.073729 3266 log.go:172] (0xc0007854a0) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0505 22:05:23.073752 3266 log.go:172] (0xc000a8d340) Data frame received for 3\nI0505 22:05:23.073777 3266 log.go:172] (0xc0005c46e0) (3) Data frame handling\nI0505 22:05:23.073802 3266 log.go:172] (0xc0007854a0) (5) Data frame sent\nI0505 22:05:23.073814 3266 log.go:172] (0xc000a8d340) Data frame received for 5\nI0505 22:05:23.073825 3266 log.go:172] (0xc0007854a0) (5) Data frame handling\nI0505 22:05:23.075702 3266 log.go:172] (0xc000a8d340) Data frame received for 1\nI0505 22:05:23.075726 3266 log.go:172] (0xc0009f8640) (1) Data frame handling\nI0505 22:05:23.075739 3266 log.go:172] (0xc0009f8640) (1) Data frame sent\nI0505 22:05:23.075755 3266 log.go:172] (0xc000a8d340) (0xc0009f8640) Stream removed, broadcasting: 1\nI0505 22:05:23.075781 3266 log.go:172] (0xc000a8d340) Go away received\nI0505 22:05:23.076205 3266 log.go:172] (0xc000a8d340) (0xc0009f8640) Stream removed, broadcasting: 1\nI0505 22:05:23.076228 3266 log.go:172] (0xc000a8d340) (0xc0005c46e0) Stream removed, broadcasting: 3\nI0505 22:05:23.076247 3266 log.go:172] (0xc000a8d340) (0xc0007854a0) Stream removed, broadcasting: 5\n" May 5 22:05:23.081: INFO: stdout: "" May 5 22:05:23.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1097 execpodgmxtr -- /bin/sh -x -c nc -zv -t -w 2 10.105.192.200 80' May 5 22:05:23.268: INFO: stderr: "I0505 22:05:23.197320 3286 log.go:172] (0xc000a39290) (0xc000a30280) Create stream\nI0505 22:05:23.197383 3286 log.go:172] (0xc000a39290) (0xc000a30280) Stream added, broadcasting: 1\nI0505 22:05:23.199694 3286 log.go:172] (0xc000a39290) Reply frame received for 1\nI0505 22:05:23.199729 3286 log.go:172] (0xc000a39290) (0xc000a30320) Create stream\nI0505 22:05:23.199744 3286 log.go:172] (0xc000a39290) (0xc000a30320) Stream added, broadcasting: 3\nI0505 22:05:23.200669 3286 log.go:172] (0xc000a39290) Reply frame received for 3\nI0505 22:05:23.200724 3286 log.go:172] (0xc000a39290) (0xc000a303c0) Create stream\nI0505 22:05:23.200745 3286 log.go:172] (0xc000a39290) (0xc000a303c0) Stream added, broadcasting: 5\nI0505 22:05:23.201813 3286 log.go:172] (0xc000a39290) Reply frame received for 5\nI0505 22:05:23.259549 3286 log.go:172] (0xc000a39290) Data frame received for 5\nI0505 22:05:23.259576 3286 log.go:172] (0xc000a303c0) (5) Data frame handling\nI0505 22:05:23.259592 3286 log.go:172] (0xc000a303c0) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.192.200 80\nI0505 22:05:23.260138 3286 log.go:172] (0xc000a39290) Data frame received for 5\nI0505 22:05:23.260160 3286 log.go:172] (0xc000a303c0) (5) Data frame handling\nI0505 22:05:23.260180 3286 log.go:172] (0xc000a303c0) (5) Data frame sent\nConnection to 10.105.192.200 80 port [tcp/http] succeeded!\nI0505 22:05:23.260506 3286 log.go:172] (0xc000a39290) Data frame received for 5\nI0505 22:05:23.260537 3286 log.go:172] (0xc000a303c0) (5) Data frame handling\nI0505 22:05:23.260711 3286 log.go:172] (0xc000a39290) Data frame received for 3\nI0505 22:05:23.260728 3286 log.go:172] (0xc000a30320) (3) Data frame handling\nI0505 22:05:23.262468 3286 log.go:172] (0xc000a39290) Data frame received for 
1\nI0505 22:05:23.262548 3286 log.go:172] (0xc000a30280) (1) Data frame handling\nI0505 22:05:23.262620 3286 log.go:172] (0xc000a30280) (1) Data frame sent\nI0505 22:05:23.262684 3286 log.go:172] (0xc000a39290) (0xc000a30280) Stream removed, broadcasting: 1\nI0505 22:05:23.262711 3286 log.go:172] (0xc000a39290) Go away received\nI0505 22:05:23.263228 3286 log.go:172] (0xc000a39290) (0xc000a30280) Stream removed, broadcasting: 1\nI0505 22:05:23.263251 3286 log.go:172] (0xc000a39290) (0xc000a30320) Stream removed, broadcasting: 3\nI0505 22:05:23.263264 3286 log.go:172] (0xc000a39290) (0xc000a303c0) Stream removed, broadcasting: 5\n" May 5 22:05:23.268: INFO: stdout: "" May 5 22:05:23.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1097 execpodgmxtr -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32072' May 5 22:05:23.462: INFO: stderr: "I0505 22:05:23.386903 3305 log.go:172] (0xc0001114a0) (0xc000699cc0) Create stream\nI0505 22:05:23.386949 3305 log.go:172] (0xc0001114a0) (0xc000699cc0) Stream added, broadcasting: 1\nI0505 22:05:23.389065 3305 log.go:172] (0xc0001114a0) Reply frame received for 1\nI0505 22:05:23.389089 3305 log.go:172] (0xc0001114a0) (0xc00091a000) Create stream\nI0505 22:05:23.389097 3305 log.go:172] (0xc0001114a0) (0xc00091a000) Stream added, broadcasting: 3\nI0505 22:05:23.390586 3305 log.go:172] (0xc0001114a0) Reply frame received for 3\nI0505 22:05:23.390649 3305 log.go:172] (0xc0001114a0) (0xc000699d60) Create stream\nI0505 22:05:23.390673 3305 log.go:172] (0xc0001114a0) (0xc000699d60) Stream added, broadcasting: 5\nI0505 22:05:23.391787 3305 log.go:172] (0xc0001114a0) Reply frame received for 5\nI0505 22:05:23.453325 3305 log.go:172] (0xc0001114a0) Data frame received for 5\nI0505 22:05:23.453382 3305 log.go:172] (0xc000699d60) (5) Data frame handling\nI0505 22:05:23.453410 3305 log.go:172] (0xc000699d60) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 32072\nI0505 22:05:23.453549 3305 log.go:172] (0xc0001114a0) Data frame received for 5\nI0505 22:05:23.453567 3305 log.go:172] (0xc000699d60) (5) Data frame handling\nI0505 22:05:23.453577 3305 log.go:172] (0xc000699d60) (5) Data frame sent\nConnection to 172.17.0.10 32072 port [tcp/32072] succeeded!\nI0505 22:05:23.454030 3305 log.go:172] (0xc0001114a0) Data frame received for 5\nI0505 22:05:23.454053 3305 log.go:172] (0xc000699d60) (5) Data frame handling\nI0505 22:05:23.454076 3305 log.go:172] (0xc0001114a0) Data frame received for 3\nI0505 22:05:23.454090 3305 log.go:172] (0xc00091a000) (3) Data frame handling\nI0505 22:05:23.456342 3305 log.go:172] (0xc0001114a0) Data frame received for 1\nI0505 22:05:23.456380 3305 log.go:172] (0xc000699cc0) (1) Data frame handling\nI0505 22:05:23.456412 3305 log.go:172] (0xc000699cc0) (1) Data frame sent\nI0505 22:05:23.456436 3305 log.go:172] (0xc0001114a0) (0xc000699cc0) Stream removed, broadcasting: 1\nI0505 22:05:23.456468 3305 log.go:172] (0xc0001114a0) Go away received\nI0505 22:05:23.456892 3305 log.go:172] (0xc0001114a0) (0xc000699cc0) Stream removed, broadcasting: 1\nI0505 22:05:23.456923 3305 log.go:172] (0xc0001114a0) (0xc00091a000) Stream removed, broadcasting: 3\nI0505 22:05:23.456936 3305 log.go:172] (0xc0001114a0) (0xc000699d60) Stream removed, broadcasting: 5\n" May 5 22:05:23.462: INFO: stdout: "" May 5 22:05:23.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1097 execpodgmxtr -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32072' May 5 22:05:23.665: INFO: stderr: 
"I0505 22:05:23.591863 3328 log.go:172] (0xc0009469a0) (0xc00075c000) Create stream\nI0505 22:05:23.591926 3328 log.go:172] (0xc0009469a0) (0xc00075c000) Stream added, broadcasting: 1\nI0505 22:05:23.594522 3328 log.go:172] (0xc0009469a0) Reply frame received for 1\nI0505 22:05:23.594571 3328 log.go:172] (0xc0009469a0) (0xc000a24000) Create stream\nI0505 22:05:23.594586 3328 log.go:172] (0xc0009469a0) (0xc000a24000) Stream added, broadcasting: 3\nI0505 22:05:23.595534 3328 log.go:172] (0xc0009469a0) Reply frame received for 3\nI0505 22:05:23.595583 3328 log.go:172] (0xc0009469a0) (0xc00075c140) Create stream\nI0505 22:05:23.595607 3328 log.go:172] (0xc0009469a0) (0xc00075c140) Stream added, broadcasting: 5\nI0505 22:05:23.596527 3328 log.go:172] (0xc0009469a0) Reply frame received for 5\nI0505 22:05:23.657014 3328 log.go:172] (0xc0009469a0) Data frame received for 5\nI0505 22:05:23.657082 3328 log.go:172] (0xc00075c140) (5) Data frame handling\nI0505 22:05:23.657311 3328 log.go:172] (0xc00075c140) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 32072\nConnection to 172.17.0.8 32072 port [tcp/32072] succeeded!\nI0505 22:05:23.657385 3328 log.go:172] (0xc0009469a0) Data frame received for 5\nI0505 22:05:23.657408 3328 log.go:172] (0xc00075c140) (5) Data frame handling\nI0505 22:05:23.657485 3328 log.go:172] (0xc0009469a0) Data frame received for 3\nI0505 22:05:23.657526 3328 log.go:172] (0xc000a24000) (3) Data frame handling\nI0505 22:05:23.659546 3328 log.go:172] (0xc0009469a0) Data frame received for 1\nI0505 22:05:23.659588 3328 log.go:172] (0xc00075c000) (1) Data frame handling\nI0505 22:05:23.659612 3328 log.go:172] (0xc00075c000) (1) Data frame sent\nI0505 22:05:23.659653 3328 log.go:172] (0xc0009469a0) (0xc00075c000) Stream removed, broadcasting: 1\nI0505 22:05:23.659702 3328 log.go:172] (0xc0009469a0) Go away received\nI0505 22:05:23.660092 3328 log.go:172] (0xc0009469a0) (0xc00075c000) Stream removed, broadcasting: 1\nI0505 22:05:23.660112 3328 log.go:172] (0xc0009469a0) (0xc000a24000) Stream removed, broadcasting: 3\nI0505 22:05:23.660122 3328 log.go:172] (0xc0009469a0) (0xc00075c140) Stream removed, broadcasting: 5\n" May 5 22:05:23.665: INFO: stdout: "" May 5 22:05:23.666: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:05:23.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1097" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.554 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":205,"skipped":3187,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:05:23.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 5 22:05:30.693: INFO: &Pod{ObjectMeta:{send-events-a2541d8a-727c-4a73-851e-91c873edc5c3 events-7625 /api/v1/namespaces/events-7625/pods/send-events-a2541d8a-727c-4a73-851e-91c873edc5c3 de9f6ca7-2ee5-4f46-8452-19f0306959db 13689662 0 2020-05-05 22:05:23 +0000 UTC map[name:foo time:947949632] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xw6sx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xw6sx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xw6sx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:05:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:05:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:05:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:05:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.23,StartTime:2020-05-05 22:05:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 22:05:27 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://47872156bc80f0f4b2cc27b309f6b3e0bbd132644e7e29a52b4542607056a476,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.23,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 5 22:05:32.830: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 5 22:05:34.833: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:05:34.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7625" for this suite. • [SLOW TEST:11.078 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":206,"skipped":3211,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:05:34.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-a4acd8fb-c8d8-4650-acdf-9b6188c6ce6f STEP: Creating configMap with name cm-test-opt-upd-da28c001-9f41-4205-923a-b980d7f1bfa2 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a4acd8fb-c8d8-4650-acdf-9b6188c6ce6f STEP: Updating configmap cm-test-opt-upd-da28c001-9f41-4205-923a-b980d7f1bfa2 STEP: Creating configMap with name cm-test-opt-create-75dda371-bd87-4fd0-ab9b-81211476d60f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:05:47.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3084" for this suite. 
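------------------------------
Reproduction sketch: the projected-configMap spec above relies on optional sources, so the pod starts even when a referenced configMap is missing, and the kubelet rewrites the volume when maps are deleted, updated, or created afterwards. A hand-run version with illustrative names (busybox stands in for the test's mounttest image):

    kubectl create namespace proj-demo
    kubectl create configmap cm-opt-del -n proj-demo --from-literal=gone=soon
    kubectl create configmap cm-opt-upd -n proj-demo --from-literal=data=value-1

    cat <<'EOF' | kubectl apply -n proj-demo -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo
    spec:
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/projected/* 2>/dev/null; echo ---; sleep 5; done"]
        volumeMounts:
        - name: config
          mountPath: /etc/projected
      volumes:
      - name: config
        projected:
          sources:
          - configMap:
              name: cm-opt-del
              optional: true    # tolerates deletion of the map below
          - configMap:
              name: cm-opt-upd
              optional: true
          - configMap:
              name: cm-opt-create
              optional: true    # does not exist yet; pod starts anyway
    EOF

    # Delete one source, update another, create the missing one; the
    # kubelet refreshes the projected volume on its periodic sync, with
    # no pod restart.
    kubectl delete configmap cm-opt-del -n proj-demo
    kubectl create configmap cm-opt-upd -n proj-demo --from-literal=data=value-2 \
      --dry-run=client -o yaml | kubectl apply -n proj-demo -f -
    kubectl create configmap cm-opt-create -n proj-demo --from-literal=extra=hello
    kubectl logs projected-demo -n proj-demo --tail=10
------------------------------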
• [SLOW TEST:12.241 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3218,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:05:47.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 5 22:05:47.410: INFO: Waiting up to 5m0s for pod "pod-a6d2123d-3bb8-493f-9f32-4fd1bc9d1d8a" in namespace "emptydir-7627" to be "success or failure" May 5 22:05:47.412: INFO: Pod "pod-a6d2123d-3bb8-493f-9f32-4fd1bc9d1d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.704587ms May 5 22:05:49.415: INFO: Pod "pod-a6d2123d-3bb8-493f-9f32-4fd1bc9d1d8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005353893s May 5 22:05:51.419: INFO: Pod "pod-a6d2123d-3bb8-493f-9f32-4fd1bc9d1d8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00882235s STEP: Saw pod success May 5 22:05:51.419: INFO: Pod "pod-a6d2123d-3bb8-493f-9f32-4fd1bc9d1d8a" satisfied condition "success or failure" May 5 22:05:51.421: INFO: Trying to get logs from node jerma-worker pod pod-a6d2123d-3bb8-493f-9f32-4fd1bc9d1d8a container test-container: STEP: delete the pod May 5 22:05:51.453: INFO: Waiting for pod pod-a6d2123d-3bb8-493f-9f32-4fd1bc9d1d8a to disappear May 5 22:05:51.493: INFO: Pod pod-a6d2123d-3bb8-493f-9f32-4fd1bc9d1d8a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:05:51.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7627" for this suite. 
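------------------------------
Reproduction sketch: the emptyDir spec above asks for a Memory-medium volume (tmpfs) and verifies that a root-owned file created with mode 0777 keeps those permissions. The pod below runs the same check with busybox instead of the test's mount helper; the names are illustrative.

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command:
        - sh
        - -c
        # Show the mount's filesystem type, then create a 0777 file and
        # list it, mirroring the conformance check.
        - >
          grep /mnt/volume /proc/mounts;
          touch /mnt/volume/f && chmod 0777 /mnt/volume/f && ls -l /mnt/volume/f
        volumeMounts:
        - name: scratch
          mountPath: /mnt/volume
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory    # back the volume with tmpfs, not node disk
    EOF
    kubectl logs -f emptydir-demo   # expect a tmpfs mount line and -rwxrwxrwx on /mnt/volume/f
------------------------------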
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:05:51.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 5 22:05:56.333: INFO: Successfully updated pod "labelsupdateba88b733-c381-4590-9cc6-fc460ba10c44" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:05:58.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5791" for this suite. • [SLOW TEST:6.852 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:05:58.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:06:03.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9529" for this suite. 
• [SLOW TEST:5.474 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3304,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:06:03.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 22:06:04.695: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10942160-134f-4df2-a7dd-ebfde836abd9" in namespace "downward-api-1807" to be "success or failure" May 5 22:06:04.835: INFO: Pod "downwardapi-volume-10942160-134f-4df2-a7dd-ebfde836abd9": Phase="Pending", Reason="", readiness=false. Elapsed: 140.203251ms May 5 22:06:07.010: INFO: Pod "downwardapi-volume-10942160-134f-4df2-a7dd-ebfde836abd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31523184s May 5 22:06:09.031: INFO: Pod "downwardapi-volume-10942160-134f-4df2-a7dd-ebfde836abd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.336061574s STEP: Saw pod success May 5 22:06:09.031: INFO: Pod "downwardapi-volume-10942160-134f-4df2-a7dd-ebfde836abd9" satisfied condition "success or failure" May 5 22:06:09.035: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-10942160-134f-4df2-a7dd-ebfde836abd9 container client-container: STEP: delete the pod May 5 22:06:09.103: INFO: Waiting for pod downwardapi-volume-10942160-134f-4df2-a7dd-ebfde836abd9 to disappear May 5 22:06:09.125: INFO: Pod downwardapi-volume-10942160-134f-4df2-a7dd-ebfde836abd9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:06:09.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1807" for this suite. 
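------------------------------
Reproduction sketch: the downward API spec above projects the container's own memory limit into a file through a volume item with a resourceFieldRef, then reads it back from the container log. A minimal equivalent, with an assumed 64Mi limit and busybox in place of the test's client image:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        resources:
          limits:
            memory: 64Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
    EOF
    kubectl logs -f downward-demo   # prints 67108864, i.e. 64Mi in bytes (divisor defaults to 1)
------------------------------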
• [SLOW TEST:5.340 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:06:09.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:06:09.396: INFO: Creating deployment "webserver-deployment" May 5 22:06:09.419: INFO: Waiting for observed generation 1 May 5 22:06:11.656: INFO: Waiting for all required pods to come up May 5 22:06:11.662: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 5 22:06:23.670: INFO: Waiting for deployment "webserver-deployment" to complete May 5 22:06:23.675: INFO: Updating deployment "webserver-deployment" with a non-existent image May 5 22:06:23.680: INFO: Updating deployment webserver-deployment May 5 22:06:23.680: INFO: Waiting for observed generation 2 May 5 22:06:25.996: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 5 22:06:26.013: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 5 22:06:26.017: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 5 22:06:26.024: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 5 22:06:26.024: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 5 22:06:26.026: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 5 22:06:26.030: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 5 22:06:26.030: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 5 22:06:26.035: INFO: Updating deployment webserver-deployment May 5 22:06:26.035: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 5 22:06:26.965: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 5 22:06:27.002: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 5 22:06:29.403: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment 
deployment-9983 /apis/apps/v1/namespaces/deployment-9983/deployments/webserver-deployment e9ffaeb2-2100-4b09-b2eb-cb5286d3bef0 13690245 3 2020-05-05 22:06:09 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034ca4f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-05 22:06:26 +0000 UTC,LastTransitionTime:2020-05-05 22:06:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-05 22:06:27 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 5 22:06:29.406: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-9983 /apis/apps/v1/namespaces/deployment-9983/replicasets/webserver-deployment-c7997dcc8 07cfb017-636a-489b-b1fd-c1f4048911b1 13690236 3 2020-05-05 22:06:23 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e9ffaeb2-2100-4b09-b2eb-cb5286d3bef0 0xc0016fe4f7 0xc0016fe4f8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0016fe598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 22:06:29.406: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 5 22:06:29.406: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-9983 /apis/apps/v1/namespaces/deployment-9983/replicasets/webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 13690227 3 2020-05-05 22:06:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e9ffaeb2-2100-4b09-b2eb-cb5286d3bef0 0xc0016fe3f7 0xc0016fe3f8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0016fe458 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 5 22:06:29.412: INFO: Pod "webserver-deployment-595b5b9587-2zzld" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2zzld webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-2zzld e2c0c874-2665-4059-b974-21f41eb4a5ea 13690250 0 2020-05-05 22:06:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc003849e37 0xc003849e38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-05 22:06:27 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.412: INFO: Pod "webserver-deployment-595b5b9587-52j7b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-52j7b webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-52j7b ebc94b3d-d2b2-4c4d-9971-70bb90a8a805 13690247 0 2020-05-05 22:06:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc003849fa7 0xc003849fa8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-05 22:06:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.413: INFO: Pod "webserver-deployment-595b5b9587-6kb9k" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6kb9k webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-6kb9k 36facc4b-4cbd-4504-9c2c-a1769104e44b 13690093 0 2020-05-05 22:06:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015dc1c7 0xc0015dc1c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.29,StartTime:2020-05-05 22:06:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 22:06:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://601feda0abf6969a0d65595fed87fb69e2160ca20ab5f2960de886da426589dd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.29,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.413: INFO: Pod "webserver-deployment-595b5b9587-7np89" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7np89 webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-7np89 fdb17d76-a097-4f2f-b009-94267ce1992a 13690275 0 2020-05-05 22:06:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015dc487 0xc0015dc488}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-05 22:06:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.413: INFO: Pod "webserver-deployment-595b5b9587-9fnnc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9fnnc webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-9fnnc f11bff76-5243-4fba-b26c-7fca487b7bd0 13690059 0 2020-05-05 22:06:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015dc727 0xc0015dc728}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.201,StartTime:2020-05-05 22:06:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 22:06:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5af38a45a509d0dc9532862d34cffd16442bee7d67afb0e852f6242758a9426b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.413: INFO: Pod "webserver-deployment-595b5b9587-b4xqr" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b4xqr webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-b4xqr 61bbea71-1c25-4580-a0e6-33c74384a65d 13690074 0 2020-05-05 22:06:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015dc9d7 0xc0015dc9d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,E
ffect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.27,StartTime:2020-05-05 22:06:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 22:06:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b7db045eb941514d875bca5dcd5da57e1fe57c0db42758b6322827eb20d694a6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.414: INFO: Pod "webserver-deployment-595b5b9587-fqplp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fqplp webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-fqplp eda9f7a8-4e20-437e-a604-1a969aff220a 13690258 0 2020-05-05 22:06:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015dcc67 0xc0015dcc68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-05 22:06:27 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.414: INFO: Pod "webserver-deployment-595b5b9587-g2zz4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g2zz4 webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-g2zz4 a3586635-6e98-475c-86c5-4014a1584739 13690299 0 2020-05-05 22:06:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015dcef7 0xc0015dcef8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-05 22:06:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.414: INFO: Pod "webserver-deployment-595b5b9587-g62rj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g62rj webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-g62rj 199ffe2d-da61-4964-ba73-cb3d527723b1 13690266 0 2020-05-05 22:06:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015dd0e7 0xc0015dd0e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-05 22:06:27 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.414: INFO: Pod "webserver-deployment-595b5b9587-g8ghf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g8ghf webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-g8ghf ebbbc9b6-6c77-40fd-923c-fbfc51ceb07f 13690311 0 2020-05-05 22:06:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015dd7f7 0xc0015dd7f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-05 22:06:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.414: INFO: Pod "webserver-deployment-595b5b9587-gflp9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gflp9 webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-gflp9 9ef81f58-7786-497b-b948-9e173ef9693f 13690070 0 2020-05-05 22:06:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015dd957 0xc0015dd958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.200,StartTime:2020-05-05 22:06:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 22:06:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fa064828f73aafe4d9720b1888cdb9e9246cb84b3be3a39158ed14504aa25647,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.414: INFO: Pod "webserver-deployment-595b5b9587-jhf54" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jhf54 webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-jhf54 4d58f44f-386b-4e55-bcbc-96ac38891109 13690295 0 2020-05-05 22:06:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015ddad7 0xc0015ddad8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-05 22:06:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.415: INFO: Pod "webserver-deployment-595b5b9587-lbv4q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lbv4q webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-lbv4q da26223d-a9b4-4116-b862-a2876a11e37a 13690238 0 2020-05-05 22:06:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015ddd67 0xc0015ddd68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-05 22:06:27 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.415: INFO: Pod "webserver-deployment-595b5b9587-lvptj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lvptj webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-lvptj 39722dce-f03c-4b35-bcbb-9601d8a105dc 13690088 0 2020-05-05 22:06:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0015ddf97 0xc0015ddf98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.204,StartTime:2020-05-05 22:06:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 22:06:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f0dfc4077a8d4ca15bdb35ab190f206dfcffaa6ce0bc2377744d5a70433fa8c2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.204,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.415: INFO: Pod "webserver-deployment-595b5b9587-n9x69" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n9x69 webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-n9x69 5bfc8200-c140-4048-8e86-a0e630892e48 13690087 0 2020-05-05 22:06:09 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0005702d7 0xc0005702d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.28,StartTime:2020-05-05 22:06:09 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 22:06:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://73f2d0803065b6cc6df870656dd051af2dc60be863c73625cdf829f358173085,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.415: INFO: Pod "webserver-deployment-595b5b9587-nhlqn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nhlqn webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-nhlqn a1e07fcb-2bc6-4472-835c-ffb4fd68ae96 13690274 0 2020-05-05 22:06:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc000057127 0xc000057128}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-05 22:06:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.415: INFO: Pod "webserver-deployment-595b5b9587-tbvfr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tbvfr webserver-deployment-595b5b9587- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-595b5b9587-tbvfr 6fa6186e-1eed-4d7d-9c0a-b44c1d0f5f91 13690232 0 2020-05-05 22:06:26 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 70baacdb-9730-41c2-a0e3-cffa31a7a966 0xc0000578a7 0xc0000578a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-05 22:06:27 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}

Pod availability in deployment-9983, logged May 5 22:06:29.415-.418 (rebuilt from the per-pod dumps; the old ReplicaSet 595b5b9587 runs docker.io/library/httpd:2.4.38-alpine, the new ReplicaSet c7997dcc8 runs the unpullable image webserver:404):

POD                                    AVAILABLE  PHASE    NODE           HOST IP      POD IP        CONTAINER STATE               START TIME
webserver-deployment-595b5b9587-vvbnb  yes        Running  jerma-worker2  172.17.0.8   10.244.2.202  Running (since 22:06:19)      22:06:09
webserver-deployment-595b5b9587-wjfk6  yes        Running  jerma-worker   172.17.0.10  10.244.1.26   Running (since 22:06:17)      22:06:09
webserver-deployment-595b5b9587-xcz4c  no         Pending  jerma-worker   172.17.0.10  -             Waiting: ContainerCreating    22:06:27
webserver-deployment-c7997dcc8-8vnjj   no         Pending  jerma-worker2  172.17.0.8   -             Waiting: ContainerCreating    22:06:25
webserver-deployment-c7997dcc8-9gw2w   no         Pending  jerma-worker2  -            -             none yet (only PodScheduled)  -
webserver-deployment-c7997dcc8-hb69m   no         Pending  jerma-worker   172.17.0.10  -             Waiting: ContainerCreating    22:06:27
webserver-deployment-c7997dcc8-lzjrb   no         Pending  jerma-worker   172.17.0.10  -             Waiting: ContainerCreating    22:06:27
webserver-deployment-c7997dcc8-nwfq5   no         Pending  jerma-worker2  172.17.0.8   -             Waiting: ContainerCreating    22:06:27
webserver-deployment-c7997dcc8-pxx84   no         Pending  jerma-worker   172.17.0.10  -             Waiting: ContainerCreating    22:06:24
webserver-deployment-c7997dcc8-qtfjz   no         Pending  jerma-worker2  172.17.0.8   10.244.2.205  Waiting: ErrImagePull         22:06:24
webserver-deployment-c7997dcc8-s26lr   no         Pending  jerma-worker2  172.17.0.8   -             Waiting: ContainerCreating    22:06:27
webserver-deployment-c7997dcc8-tf4cp   no         Pending  jerma-worker   172.17.0.10  -             Waiting: ContainerCreating    22:06:27
webserver-deployment-c7997dcc8-tq78n   no         Pending  jerma-worker   172.17.0.10  -             Waiting: ContainerCreating    22:06:24
webserver-deployment-c7997dcc8-v7l4f   no         Pending  jerma-worker   172.17.0.10  -             Waiting: ContainerCreating    22:06:27

All fourteen pods share the same BestEffort spec: a single httpd container with no resource requests or limits, restartPolicy Always, terminationGracePeriodSeconds 0, the default-token-4gglj service-account token mounted read-only at /var/run/secrets/kubernetes.io/serviceaccount, and the standard 300s not-ready/unreachable tolerations. Each Pending pod reports Ready=False and ContainersReady=False with Reason:ContainersNotReady, Message:containers with unready status: [httpd].

The ErrImagePull on qtfjz carries the full pull failure: rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
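Each "is available" / "is not available" verdict above reduces to the pod's Ready condition. Below is a minimal client-go sketch, not part of the test suite, that reproduces the same summary from outside the test binary. It assumes client-go v0.17.x (where List takes no context argument) to match the cluster's v1.17 API, and reuses the deployment-9983 namespace and name=httpd label visible in the dumps; the isAvailable helper is a simplified stand-in for the framework's availability check, which additionally honors the deployment's minReadySeconds.

// Hypothetical sketch: list the deployment's pods and print
// available / not-available verdicts like the log above.
package main

import (
	"fmt"
	"os"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isAvailable is a simplified availability check: with minReadySeconds=0,
// a pod counts as available once its Ready condition is True.
func isAvailable(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

func main() {
	// KUBECONFIG must point at the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Namespace and label selector taken from the pod dumps above.
	pods, err := client.CoreV1().Pods("deployment-9983").List(metav1.ListOptions{
		LabelSelector: "name=httpd",
	})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		verdict := "is not available"
		if isAvailable(&pod) {
			verdict = "is available"
		}
		fmt.Printf("Pod %q %s: phase=%s node=%s\n",
			pod.Name, verdict, pod.Status.Phase, pod.Spec.NodeName)
	}
}

Run against the state captured at 22:06:29, this would report the two Running httpd pods as available and the remaining pods, still creating or failing to pull webserver:404, as not available.

May 5 22:06:29.418: INFO: Pod "webserver-deployment-c7997dcc8-z25bt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z25bt webserver-deployment-c7997dcc8- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-c7997dcc8-z25bt c76c4680-14a4-4f5f-a216-5ff668ffd3c9 13690156 0 2020-05-05 22:06:24 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 07cfb017-636a-489b-b1fd-c1f4048911b1 0xc00387d0f7 0xc00387d0f8}] [] 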
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-05 22:06:24 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:06:29.418: INFO: Pod "webserver-deployment-c7997dcc8-zvzlb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zvzlb webserver-deployment-c7997dcc8- deployment-9983 /api/v1/namespaces/deployment-9983/pods/webserver-deployment-c7997dcc8-zvzlb a40b52e8-829f-454a-934e-abf4e9b8d8ac 13690286 0 2020-05-05 22:06:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 07cfb017-636a-489b-b1fd-c1f4048911b1 0xc00387d277 0xc00387d278}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4gglj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4gglj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4gglj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:06:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-05 22:06:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:06:29.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9983" for this suite. • [SLOW TEST:20.257 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":212,"skipped":3334,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:06:29.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-a1fff74e-353c-4b93-ba9d-1dc52306920c in namespace container-probe-8968 May 5 22:06:47.633: INFO: Started pod liveness-a1fff74e-353c-4b93-ba9d-1dc52306920c in namespace container-probe-8968 STEP: checking the pod's current state and verifying that restartCount is present May 5 22:06:47.722: INFO: 
Initial restart count of pod liveness-a1fff74e-353c-4b93-ba9d-1dc52306920c is 0 May 5 22:07:04.194: INFO: Restart count of pod container-probe-8968/liveness-a1fff74e-353c-4b93-ba9d-1dc52306920c is now 1 (16.472463365s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:07:04.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8968" for this suite. • [SLOW TEST:34.833 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3366,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:07:04.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 5 22:07:04.896: INFO: Waiting up to 5m0s for pod "pod-1229dbb0-a7fe-4a4a-a8e0-15be71428e53" in namespace "emptydir-6853" to be "success or failure" May 5 22:07:04.899: INFO: Pod "pod-1229dbb0-a7fe-4a4a-a8e0-15be71428e53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.765821ms May 5 22:07:06.902: INFO: Pod "pod-1229dbb0-a7fe-4a4a-a8e0-15be71428e53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005988981s May 5 22:07:08.906: INFO: Pod "pod-1229dbb0-a7fe-4a4a-a8e0-15be71428e53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010335088s STEP: Saw pod success May 5 22:07:08.906: INFO: Pod "pod-1229dbb0-a7fe-4a4a-a8e0-15be71428e53" satisfied condition "success or failure" May 5 22:07:08.908: INFO: Trying to get logs from node jerma-worker2 pod pod-1229dbb0-a7fe-4a4a-a8e0-15be71428e53 container test-container: STEP: delete the pod May 5 22:07:08.984: INFO: Waiting for pod pod-1229dbb0-a7fe-4a4a-a8e0-15be71428e53 to disappear May 5 22:07:09.002: INFO: Pod pod-1229dbb0-a7fe-4a4a-a8e0-15be71428e53 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:07:09.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6853" for this suite. 
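(Aside: the pod this emptyDir test builds is easy to reproduce outside the framework. A minimal sketch, assuming client-go and k8s.io/api at v0.17 to match the v1.17 apiserver above, where typed clients do not yet take a context; the namespace, the busybox image, and the shell command are illustrative stand-ins for the suite's real mounttest container, not taken from it.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	mode := 0666 // the (root,0666,default) case: root user, 0666 file, default medium
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // lets the pod reach Succeeded
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Empty Medium means node-local disk; "Memory" would be tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file with the mode under test, then print it back
				// so the result can be scraped from the container log.
				Command: []string{"sh", "-c",
					fmt.Sprintf("touch /mnt/f && chmod %o /mnt/f && stat -c %%a /mnt/f", mode)},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
			}},
		},
	}
	// v0.17 typed clients take the object directly (no context argument).
	if _, err := client.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}
```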
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3386,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:07:09.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 22:07:10.392: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 22:07:12.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313230, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313230, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 22:07:15.475: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:07:16.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8077" for this suite. STEP: Destroying namespace "webhook-8077-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.138 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":215,"skipped":3401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:07:16.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 22:07:16.202: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c0fc118-eb09-4e9b-a5aa-52eb1a6465b8" in namespace "projected-7360" to be "success or failure" May 5 22:07:16.218: INFO: Pod "downwardapi-volume-2c0fc118-eb09-4e9b-a5aa-52eb1a6465b8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.356971ms May 5 22:07:18.222: INFO: Pod "downwardapi-volume-2c0fc118-eb09-4e9b-a5aa-52eb1a6465b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020270552s May 5 22:07:20.226: INFO: Pod "downwardapi-volume-2c0fc118-eb09-4e9b-a5aa-52eb1a6465b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02425481s STEP: Saw pod success May 5 22:07:20.226: INFO: Pod "downwardapi-volume-2c0fc118-eb09-4e9b-a5aa-52eb1a6465b8" satisfied condition "success or failure" May 5 22:07:20.229: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-2c0fc118-eb09-4e9b-a5aa-52eb1a6465b8 container client-container: STEP: delete the pod May 5 22:07:20.335: INFO: Waiting for pod downwardapi-volume-2c0fc118-eb09-4e9b-a5aa-52eb1a6465b8 to disappear May 5 22:07:20.375: INFO: Pod downwardapi-volume-2c0fc118-eb09-4e9b-a5aa-52eb1a6465b8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:07:20.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7360" for this suite. 
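(Aside: the "set mode on item file" case reduces to a single field: Mode on a DownwardAPIVolumeFile inside a projected volume, which the pod then stats to verify. A sketch of just that volume, using the same corev1 import as above; the "podname" path and 0400 mode are illustrative values, not the suite's.)

```go
// podInfoVolume builds a projected volume whose single downward-API item
// carries an explicit per-file mode, overriding the volume-wide DefaultMode.
func podInfoVolume() corev1.Volume {
	mode := int32(0400) // per-item mode under test (value illustrative)
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname", // file exposed under the mount point
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
							// Applies to this one file only; the test reads it
							// back with stat inside the container.
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
}
```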
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:07:20.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 5 22:07:20.461: INFO: Waiting up to 5m0s for pod "pod-fdabd3d0-a3bb-48a2-b2f6-7103b5d569b0" in namespace "emptydir-9108" to be "success or failure" May 5 22:07:20.464: INFO: Pod "pod-fdabd3d0-a3bb-48a2-b2f6-7103b5d569b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.905578ms May 5 22:07:22.468: INFO: Pod "pod-fdabd3d0-a3bb-48a2-b2f6-7103b5d569b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006362846s May 5 22:07:24.567: INFO: Pod "pod-fdabd3d0-a3bb-48a2-b2f6-7103b5d569b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105318867s STEP: Saw pod success May 5 22:07:24.567: INFO: Pod "pod-fdabd3d0-a3bb-48a2-b2f6-7103b5d569b0" satisfied condition "success or failure" May 5 22:07:24.571: INFO: Trying to get logs from node jerma-worker2 pod pod-fdabd3d0-a3bb-48a2-b2f6-7103b5d569b0 container test-container: STEP: delete the pod May 5 22:07:24.736: INFO: Waiting for pod pod-fdabd3d0-a3bb-48a2-b2f6-7103b5d569b0 to disappear May 5 22:07:24.758: INFO: Pod pod-fdabd3d0-a3bb-48a2-b2f6-7103b5d569b0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:07:24.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9108" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3462,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:07:24.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:07:29.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4160" for this suite. • [SLOW TEST:5.233 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":218,"skipped":3474,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:07:29.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 22:07:30.849: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 22:07:32.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313250, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724313250, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313250, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313250, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:07:34.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313250, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313250, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313250, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313250, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 22:07:37.939: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:07:37.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5752" for this suite. STEP: Destroying namespace "webhook-5752-markers" for this suite. 
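(Aside: the discovery walk above is three GETs: /apis, /apis/admissionregistration.k8s.io, and /apis/admissionregistration.k8s.io/v1. A sketch of the same checks via client-go's DiscoveryInterface, v0.17 signatures, with a clientset built as before; the group and resource names come straight from the STEPs.)

```go
// checkWebhookDiscovery performs the same lookups as the STEPs above.
func checkWebhookDiscovery(client *kubernetes.Clientset) error {
	groups, err := client.Discovery().ServerGroups() // GET /apis
	if err != nil {
		return err
	}
	for _, g := range groups.Groups {
		if g.Name == "admissionregistration.k8s.io" {
			fmt.Println("preferred version:", g.PreferredVersion.GroupVersion)
		}
	}

	// GET /apis/admissionregistration.k8s.io/v1, then scan its resource list
	// for the two webhook configuration kinds.
	rl, err := client.Discovery().ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		return err
	}
	for _, r := range rl.APIResources {
		if r.Name == "mutatingwebhookconfigurations" || r.Name == "validatingwebhookconfigurations" {
			fmt.Println("found resource:", r.Name)
		}
	}
	return nil
}
```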
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.158 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":219,"skipped":3492,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:07:38.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 5 22:07:45.261: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:07:46.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9639" for this suite. 
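(Aside: adoption and release here are purely label-driven. Once the ReplicaSet's selector matches the orphan pod, the controller adds an ownerReference; flipping the matched label makes it strip the reference again and create a matching replacement. A sketch of the release step with v0.17 signatures; the namespace and pod name are taken from the log, while the replacement label value is an assumption.)

```go
// releasePod flips the matched label so the ReplicaSet controller releases
// the pod (removes its ownerReference) and spins up a replacement.
func releasePod(client *kubernetes.Clientset) error {
	pods := client.CoreV1().Pods("replicaset-9639")
	pod, err := pods.Get("pod-adoption-release", metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Println("owners before release:", len(pod.OwnerReferences)) // 1: the ReplicaSet

	pod.Labels["name"] = "released" // no longer matches the RS selector
	_, err = pods.Update(pod)       // v0.17: Update(pod), no context
	return err
}
```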
• [SLOW TEST:8.138 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":220,"skipped":3513,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:07:46.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-01739219-5893-41cf-8cae-a5243f2cb71f in namespace container-probe-4711 May 5 22:07:53.189: INFO: Started pod busybox-01739219-5893-41cf-8cae-a5243f2cb71f in namespace container-probe-4711 STEP: checking the pod's current state and verifying that restartCount is present May 5 22:07:53.191: INFO: Initial restart count of pod busybox-01739219-5893-41cf-8cae-a5243f2cb71f is 0 May 5 22:08:43.632: INFO: Restart count of pod container-probe-4711/busybox-01739219-5893-41cf-8cae-a5243f2cb71f is now 1 (50.440490312s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:08:43.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4711" for this suite. 
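(Aside: the probed container is a script that lets /tmp/health exist for a while and then removes it, paired with an exec probe that cats the file. A sketch of that container spec against the v1.17 core API, where the probe's embedded handler field is still named Handler, renamed ProbeHandler in later releases; the timings and thresholds are illustrative, though consistent with the ~50s first restart logged above.)

```go
// livenessContainer returns a container whose exec probe fails once the
// health file disappears, driving restartCount from 0 to 1 as logged above.
func livenessContainer() corev1.Container {
	return corev1.Container{
		Name:  "busybox",
		Image: "busybox",
		// Health file exists for 30s, then the probe starts failing.
		Command: []string{"/bin/sh", "-c",
			"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15,
			PeriodSeconds:       5,
			FailureThreshold:    1,
		},
	}
}
```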
• [SLOW TEST:57.412 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3514,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:08:43.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 5 22:08:43.773: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-906 /api/v1/namespaces/watch-906/configmaps/e2e-watch-test-watch-closed aa1085ed-9d58-47c5-9496-84229d18fcba 13691336 0 2020-05-05 22:08:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 5 22:08:43.773: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-906 /api/v1/namespaces/watch-906/configmaps/e2e-watch-test-watch-closed aa1085ed-9d58-47c5-9496-84229d18fcba 13691337 0 2020-05-05 22:08:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 5 22:08:43.846: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-906 /api/v1/namespaces/watch-906/configmaps/e2e-watch-test-watch-closed aa1085ed-9d58-47c5-9496-84229d18fcba 13691338 0 2020-05-05 22:08:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 5 22:08:43.846: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-906 /api/v1/namespaces/watch-906/configmaps/e2e-watch-test-watch-closed aa1085ed-9d58-47c5-9496-84229d18fcba 13691339 0 2020-05-05 22:08:43 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:08:43.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-906" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":222,"skipped":3523,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:08:43.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-983 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-983 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-983 May 5 22:08:43.970: INFO: Found 0 stateful pods, waiting for 1 May 5 22:08:54.054: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 5 22:08:54.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 22:08:54.329: INFO: stderr: "I0505 22:08:54.186800 3352 log.go:172] (0xc000a58630) (0xc00065bb80) Create stream\nI0505 22:08:54.186857 3352 log.go:172] (0xc000a58630) (0xc00065bb80) Stream added, broadcasting: 1\nI0505 22:08:54.189475 3352 log.go:172] (0xc000a58630) Reply frame received for 1\nI0505 22:08:54.189509 3352 log.go:172] (0xc000a58630) (0xc00065bd60) Create stream\nI0505 22:08:54.189519 3352 log.go:172] (0xc000a58630) (0xc00065bd60) Stream added, broadcasting: 3\nI0505 22:08:54.190502 3352 log.go:172] (0xc000a58630) Reply frame received for 3\nI0505 22:08:54.190538 3352 log.go:172] (0xc000a58630) (0xc0008d2000) Create stream\nI0505 22:08:54.190548 3352 log.go:172] (0xc000a58630) (0xc0008d2000) Stream added, broadcasting: 5\nI0505 22:08:54.191559 3352 log.go:172] (0xc000a58630) Reply frame received for 5\nI0505 22:08:54.285299 3352 log.go:172] (0xc000a58630) Data frame received for 5\nI0505 22:08:54.285322 3352 log.go:172] (0xc0008d2000) (5) Data frame handling\nI0505 22:08:54.285331 3352 log.go:172] (0xc0008d2000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 22:08:54.321077 3352 log.go:172] (0xc000a58630) Data frame received for 3\nI0505 22:08:54.321339 3352 log.go:172] (0xc00065bd60) (3) Data frame 
handling\nI0505 22:08:54.321372 3352 log.go:172] (0xc00065bd60) (3) Data frame sent\nI0505 22:08:54.321397 3352 log.go:172] (0xc000a58630) Data frame received for 3\nI0505 22:08:54.321427 3352 log.go:172] (0xc00065bd60) (3) Data frame handling\nI0505 22:08:54.321445 3352 log.go:172] (0xc000a58630) Data frame received for 5\nI0505 22:08:54.321457 3352 log.go:172] (0xc0008d2000) (5) Data frame handling\nI0505 22:08:54.322975 3352 log.go:172] (0xc000a58630) Data frame received for 1\nI0505 22:08:54.322987 3352 log.go:172] (0xc00065bb80) (1) Data frame handling\nI0505 22:08:54.322998 3352 log.go:172] (0xc00065bb80) (1) Data frame sent\nI0505 22:08:54.323006 3352 log.go:172] (0xc000a58630) (0xc00065bb80) Stream removed, broadcasting: 1\nI0505 22:08:54.323299 3352 log.go:172] (0xc000a58630) (0xc00065bb80) Stream removed, broadcasting: 1\nI0505 22:08:54.323316 3352 log.go:172] (0xc000a58630) (0xc00065bd60) Stream removed, broadcasting: 3\nI0505 22:08:54.323360 3352 log.go:172] (0xc000a58630) Go away received\nI0505 22:08:54.323620 3352 log.go:172] (0xc000a58630) (0xc0008d2000) Stream removed, broadcasting: 5\n" May 5 22:08:54.329: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 22:08:54.329: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 22:08:54.332: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 5 22:09:04.339: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 5 22:09:04.339: INFO: Waiting for statefulset status.replicas updated to 0 May 5 22:09:04.353: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:04.353: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:55 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:04.353: INFO: May 5 22:09:04.353: INFO: StatefulSet ss has not reached scale 3, at 1 May 5 22:09:05.357: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992671657s May 5 22:09:06.629: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989011217s May 5 22:09:07.784: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.716588008s May 5 22:09:08.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.561796001s May 5 22:09:09.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.536753715s May 5 22:09:10.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.531016251s May 5 22:09:11.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.526343407s May 5 22:09:12.830: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.520789351s May 5 22:09:13.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 515.778367ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-983 May 5 22:09:14.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:09:15.026: 
INFO: stderr: "I0505 22:09:14.956280 3375 log.go:172] (0xc000104e70) (0xc0005b3d60) Create stream\nI0505 22:09:14.956323 3375 log.go:172] (0xc000104e70) (0xc0005b3d60) Stream added, broadcasting: 1\nI0505 22:09:14.958412 3375 log.go:172] (0xc000104e70) Reply frame received for 1\nI0505 22:09:14.958438 3375 log.go:172] (0xc000104e70) (0xc00098a000) Create stream\nI0505 22:09:14.958446 3375 log.go:172] (0xc000104e70) (0xc00098a000) Stream added, broadcasting: 3\nI0505 22:09:14.959124 3375 log.go:172] (0xc000104e70) Reply frame received for 3\nI0505 22:09:14.959145 3375 log.go:172] (0xc000104e70) (0xc0005b3e00) Create stream\nI0505 22:09:14.959152 3375 log.go:172] (0xc000104e70) (0xc0005b3e00) Stream added, broadcasting: 5\nI0505 22:09:14.959813 3375 log.go:172] (0xc000104e70) Reply frame received for 5\nI0505 22:09:15.020743 3375 log.go:172] (0xc000104e70) Data frame received for 5\nI0505 22:09:15.020793 3375 log.go:172] (0xc0005b3e00) (5) Data frame handling\nI0505 22:09:15.020814 3375 log.go:172] (0xc0005b3e00) (5) Data frame sent\nI0505 22:09:15.020829 3375 log.go:172] (0xc000104e70) Data frame received for 5\nI0505 22:09:15.020840 3375 log.go:172] (0xc0005b3e00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 22:09:15.020874 3375 log.go:172] (0xc000104e70) Data frame received for 3\nI0505 22:09:15.020899 3375 log.go:172] (0xc00098a000) (3) Data frame handling\nI0505 22:09:15.020912 3375 log.go:172] (0xc00098a000) (3) Data frame sent\nI0505 22:09:15.020921 3375 log.go:172] (0xc000104e70) Data frame received for 3\nI0505 22:09:15.020928 3375 log.go:172] (0xc00098a000) (3) Data frame handling\nI0505 22:09:15.022305 3375 log.go:172] (0xc000104e70) Data frame received for 1\nI0505 22:09:15.022322 3375 log.go:172] (0xc0005b3d60) (1) Data frame handling\nI0505 22:09:15.022353 3375 log.go:172] (0xc0005b3d60) (1) Data frame sent\nI0505 22:09:15.022372 3375 log.go:172] (0xc000104e70) (0xc0005b3d60) Stream removed, broadcasting: 1\nI0505 22:09:15.022458 3375 log.go:172] (0xc000104e70) Go away received\nI0505 22:09:15.022597 3375 log.go:172] (0xc000104e70) (0xc0005b3d60) Stream removed, broadcasting: 1\nI0505 22:09:15.022610 3375 log.go:172] (0xc000104e70) (0xc00098a000) Stream removed, broadcasting: 3\nI0505 22:09:15.022619 3375 log.go:172] (0xc000104e70) (0xc0005b3e00) Stream removed, broadcasting: 5\n" May 5 22:09:15.026: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 22:09:15.026: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 22:09:15.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:09:15.245: INFO: stderr: "I0505 22:09:15.145369 3395 log.go:172] (0xc000b68580) (0xc000a8a280) Create stream\nI0505 22:09:15.145420 3395 log.go:172] (0xc000b68580) (0xc000a8a280) Stream added, broadcasting: 1\nI0505 22:09:15.149889 3395 log.go:172] (0xc000b68580) Reply frame received for 1\nI0505 22:09:15.149930 3395 log.go:172] (0xc000b68580) (0xc00057a6e0) Create stream\nI0505 22:09:15.149941 3395 log.go:172] (0xc000b68580) (0xc00057a6e0) Stream added, broadcasting: 3\nI0505 22:09:15.150868 3395 log.go:172] (0xc000b68580) Reply frame received for 3\nI0505 22:09:15.150895 3395 log.go:172] (0xc000b68580) (0xc0007954a0) Create stream\nI0505 22:09:15.150902 3395 log.go:172] (0xc000b68580) (0xc0007954a0) 
Stream added, broadcasting: 5\nI0505 22:09:15.151750 3395 log.go:172] (0xc000b68580) Reply frame received for 5\nI0505 22:09:15.221626 3395 log.go:172] (0xc000b68580) Data frame received for 5\nI0505 22:09:15.221652 3395 log.go:172] (0xc0007954a0) (5) Data frame handling\nI0505 22:09:15.221668 3395 log.go:172] (0xc0007954a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0505 22:09:15.237969 3395 log.go:172] (0xc000b68580) Data frame received for 5\nI0505 22:09:15.237988 3395 log.go:172] (0xc0007954a0) (5) Data frame handling\nI0505 22:09:15.237997 3395 log.go:172] (0xc0007954a0) (5) Data frame sent\nI0505 22:09:15.238006 3395 log.go:172] (0xc000b68580) Data frame received for 5\nI0505 22:09:15.238010 3395 log.go:172] (0xc0007954a0) (5) Data frame handling\nI0505 22:09:15.238020 3395 log.go:172] (0xc000b68580) Data frame received for 3\nI0505 22:09:15.238024 3395 log.go:172] (0xc00057a6e0) (3) Data frame handling\nI0505 22:09:15.238032 3395 log.go:172] (0xc00057a6e0) (3) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0505 22:09:15.238057 3395 log.go:172] (0xc0007954a0) (5) Data frame sent\nI0505 22:09:15.238997 3395 log.go:172] (0xc000b68580) Data frame received for 3\nI0505 22:09:15.239009 3395 log.go:172] (0xc00057a6e0) (3) Data frame handling\nI0505 22:09:15.239036 3395 log.go:172] (0xc000b68580) Data frame received for 5\nI0505 22:09:15.239069 3395 log.go:172] (0xc0007954a0) (5) Data frame handling\nI0505 22:09:15.240772 3395 log.go:172] (0xc000b68580) Data frame received for 1\nI0505 22:09:15.240794 3395 log.go:172] (0xc000a8a280) (1) Data frame handling\nI0505 22:09:15.240812 3395 log.go:172] (0xc000a8a280) (1) Data frame sent\nI0505 22:09:15.240828 3395 log.go:172] (0xc000b68580) (0xc000a8a280) Stream removed, broadcasting: 1\nI0505 22:09:15.241008 3395 log.go:172] (0xc000b68580) Go away received\nI0505 22:09:15.241228 3395 log.go:172] (0xc000b68580) (0xc000a8a280) Stream removed, broadcasting: 1\nI0505 22:09:15.241254 3395 log.go:172] (0xc000b68580) (0xc00057a6e0) Stream removed, broadcasting: 3\nI0505 22:09:15.241264 3395 log.go:172] (0xc000b68580) (0xc0007954a0) Stream removed, broadcasting: 5\n" May 5 22:09:15.245: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 22:09:15.245: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 22:09:15.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:09:15.439: INFO: stderr: "I0505 22:09:15.377803 3415 log.go:172] (0xc0000f4dc0) (0xc0009fe000) Create stream\nI0505 22:09:15.377859 3415 log.go:172] (0xc0000f4dc0) (0xc0009fe000) Stream added, broadcasting: 1\nI0505 22:09:15.380311 3415 log.go:172] (0xc0000f4dc0) Reply frame received for 1\nI0505 22:09:15.380346 3415 log.go:172] (0xc0000f4dc0) (0xc0009fe0a0) Create stream\nI0505 22:09:15.380356 3415 log.go:172] (0xc0000f4dc0) (0xc0009fe0a0) Stream added, broadcasting: 3\nI0505 22:09:15.381465 3415 log.go:172] (0xc0000f4dc0) Reply frame received for 3\nI0505 22:09:15.381507 3415 log.go:172] (0xc0000f4dc0) (0xc000b98000) Create stream\nI0505 22:09:15.381521 3415 log.go:172] (0xc0000f4dc0) (0xc000b98000) Stream added, broadcasting: 5\nI0505 22:09:15.382350 3415 log.go:172] (0xc0000f4dc0) Reply frame received for 5\nI0505 22:09:15.435043 3415 log.go:172] 
(0xc0000f4dc0) Data frame received for 5\nI0505 22:09:15.435075 3415 log.go:172] (0xc000b98000) (5) Data frame handling\nI0505 22:09:15.435082 3415 log.go:172] (0xc000b98000) (5) Data frame sent\nI0505 22:09:15.435088 3415 log.go:172] (0xc0000f4dc0) Data frame received for 5\nI0505 22:09:15.435092 3415 log.go:172] (0xc000b98000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0505 22:09:15.435109 3415 log.go:172] (0xc0000f4dc0) Data frame received for 3\nI0505 22:09:15.435114 3415 log.go:172] (0xc0009fe0a0) (3) Data frame handling\nI0505 22:09:15.435119 3415 log.go:172] (0xc0009fe0a0) (3) Data frame sent\nI0505 22:09:15.435124 3415 log.go:172] (0xc0000f4dc0) Data frame received for 3\nI0505 22:09:15.435128 3415 log.go:172] (0xc0009fe0a0) (3) Data frame handling\nI0505 22:09:15.436241 3415 log.go:172] (0xc0000f4dc0) Data frame received for 1\nI0505 22:09:15.436320 3415 log.go:172] (0xc0009fe000) (1) Data frame handling\nI0505 22:09:15.436340 3415 log.go:172] (0xc0009fe000) (1) Data frame sent\nI0505 22:09:15.436352 3415 log.go:172] (0xc0000f4dc0) (0xc0009fe000) Stream removed, broadcasting: 1\nI0505 22:09:15.436364 3415 log.go:172] (0xc0000f4dc0) Go away received\nI0505 22:09:15.436678 3415 log.go:172] (0xc0000f4dc0) (0xc0009fe000) Stream removed, broadcasting: 1\nI0505 22:09:15.436697 3415 log.go:172] (0xc0000f4dc0) (0xc0009fe0a0) Stream removed, broadcasting: 3\nI0505 22:09:15.436709 3415 log.go:172] (0xc0000f4dc0) (0xc000b98000) Stream removed, broadcasting: 5\n" May 5 22:09:15.439: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 5 22:09:15.439: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 5 22:09:15.443: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 5 22:09:15.443: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 5 22:09:15.443: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 5 22:09:15.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 22:09:15.660: INFO: stderr: "I0505 22:09:15.584079 3436 log.go:172] (0xc0000f5340) (0xc0006201e0) Create stream\nI0505 22:09:15.584143 3436 log.go:172] (0xc0000f5340) (0xc0006201e0) Stream added, broadcasting: 1\nI0505 22:09:15.587072 3436 log.go:172] (0xc0000f5340) Reply frame received for 1\nI0505 22:09:15.587114 3436 log.go:172] (0xc0000f5340) (0xc0006e99a0) Create stream\nI0505 22:09:15.587125 3436 log.go:172] (0xc0000f5340) (0xc0006e99a0) Stream added, broadcasting: 3\nI0505 22:09:15.588105 3436 log.go:172] (0xc0000f5340) Reply frame received for 3\nI0505 22:09:15.588139 3436 log.go:172] (0xc0000f5340) (0xc00070d360) Create stream\nI0505 22:09:15.588149 3436 log.go:172] (0xc0000f5340) (0xc00070d360) Stream added, broadcasting: 5\nI0505 22:09:15.589369 3436 log.go:172] (0xc0000f5340) Reply frame received for 5\nI0505 22:09:15.653946 3436 log.go:172] (0xc0000f5340) Data frame received for 5\nI0505 22:09:15.653969 3436 log.go:172] (0xc00070d360) (5) Data frame handling\nI0505 22:09:15.653977 3436 log.go:172] (0xc00070d360) (5) Data frame sent\nI0505 22:09:15.653983 3436 log.go:172] 
(0xc0000f5340) Data frame received for 5\nI0505 22:09:15.653989 3436 log.go:172] (0xc00070d360) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 22:09:15.654006 3436 log.go:172] (0xc0000f5340) Data frame received for 3\nI0505 22:09:15.654011 3436 log.go:172] (0xc0006e99a0) (3) Data frame handling\nI0505 22:09:15.654018 3436 log.go:172] (0xc0006e99a0) (3) Data frame sent\nI0505 22:09:15.654023 3436 log.go:172] (0xc0000f5340) Data frame received for 3\nI0505 22:09:15.654027 3436 log.go:172] (0xc0006e99a0) (3) Data frame handling\nI0505 22:09:15.655525 3436 log.go:172] (0xc0000f5340) Data frame received for 1\nI0505 22:09:15.655542 3436 log.go:172] (0xc0006201e0) (1) Data frame handling\nI0505 22:09:15.655551 3436 log.go:172] (0xc0006201e0) (1) Data frame sent\nI0505 22:09:15.655561 3436 log.go:172] (0xc0000f5340) (0xc0006201e0) Stream removed, broadcasting: 1\nI0505 22:09:15.655578 3436 log.go:172] (0xc0000f5340) Go away received\nI0505 22:09:15.655836 3436 log.go:172] (0xc0000f5340) (0xc0006201e0) Stream removed, broadcasting: 1\nI0505 22:09:15.655850 3436 log.go:172] (0xc0000f5340) (0xc0006e99a0) Stream removed, broadcasting: 3\nI0505 22:09:15.655855 3436 log.go:172] (0xc0000f5340) (0xc00070d360) Stream removed, broadcasting: 5\n" May 5 22:09:15.660: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 22:09:15.660: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 22:09:15.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 22:09:15.916: INFO: stderr: "I0505 22:09:15.796074 3457 log.go:172] (0xc0009c6000) (0xc000976000) Create stream\nI0505 22:09:15.796164 3457 log.go:172] (0xc0009c6000) (0xc000976000) Stream added, broadcasting: 1\nI0505 22:09:15.809439 3457 log.go:172] (0xc0009c6000) Reply frame received for 1\nI0505 22:09:15.809472 3457 log.go:172] (0xc0009c6000) (0xc0009560a0) Create stream\nI0505 22:09:15.809479 3457 log.go:172] (0xc0009c6000) (0xc0009560a0) Stream added, broadcasting: 3\nI0505 22:09:15.810236 3457 log.go:172] (0xc0009c6000) Reply frame received for 3\nI0505 22:09:15.810259 3457 log.go:172] (0xc0009c6000) (0xc000956140) Create stream\nI0505 22:09:15.810266 3457 log.go:172] (0xc0009c6000) (0xc000956140) Stream added, broadcasting: 5\nI0505 22:09:15.810911 3457 log.go:172] (0xc0009c6000) Reply frame received for 5\nI0505 22:09:15.871770 3457 log.go:172] (0xc0009c6000) Data frame received for 5\nI0505 22:09:15.871802 3457 log.go:172] (0xc000956140) (5) Data frame handling\nI0505 22:09:15.871824 3457 log.go:172] (0xc000956140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 22:09:15.907565 3457 log.go:172] (0xc0009c6000) Data frame received for 3\nI0505 22:09:15.907612 3457 log.go:172] (0xc0009560a0) (3) Data frame handling\nI0505 22:09:15.907653 3457 log.go:172] (0xc0009560a0) (3) Data frame sent\nI0505 22:09:15.907809 3457 log.go:172] (0xc0009c6000) Data frame received for 3\nI0505 22:09:15.907852 3457 log.go:172] (0xc0009560a0) (3) Data frame handling\nI0505 22:09:15.907884 3457 log.go:172] (0xc0009c6000) Data frame received for 5\nI0505 22:09:15.907899 3457 log.go:172] (0xc000956140) (5) Data frame handling\nI0505 22:09:15.910060 3457 log.go:172] (0xc0009c6000) Data frame received for 1\nI0505 22:09:15.910094 3457 log.go:172] 
(0xc000976000) (1) Data frame handling\nI0505 22:09:15.910124 3457 log.go:172] (0xc000976000) (1) Data frame sent\nI0505 22:09:15.910195 3457 log.go:172] (0xc0009c6000) (0xc000976000) Stream removed, broadcasting: 1\nI0505 22:09:15.910240 3457 log.go:172] (0xc0009c6000) Go away received\nI0505 22:09:15.910626 3457 log.go:172] (0xc0009c6000) (0xc000976000) Stream removed, broadcasting: 1\nI0505 22:09:15.910646 3457 log.go:172] (0xc0009c6000) (0xc0009560a0) Stream removed, broadcasting: 3\nI0505 22:09:15.910655 3457 log.go:172] (0xc0009c6000) (0xc000956140) Stream removed, broadcasting: 5\n" May 5 22:09:15.916: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 5 22:09:15.916: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 22:09:15.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 5 22:09:16.156: INFO: stderr: "I0505 22:09:16.050636 3477 log.go:172] (0xc0009f6000) (0xc00068c640) Create stream\nI0505 22:09:16.050704 3477 log.go:172] (0xc0009f6000) (0xc00068c640) Stream added, broadcasting: 1\nI0505 22:09:16.053727 3477 log.go:172] (0xc0009f6000) Reply frame received for 1\nI0505 22:09:16.053780 3477 log.go:172] (0xc0009f6000) (0xc000221400) Create stream\nI0505 22:09:16.053793 3477 log.go:172] (0xc0009f6000) (0xc000221400) Stream added, broadcasting: 3\nI0505 22:09:16.054778 3477 log.go:172] (0xc0009f6000) Reply frame received for 3\nI0505 22:09:16.054802 3477 log.go:172] (0xc0009f6000) (0xc0008ec000) Create stream\nI0505 22:09:16.054809 3477 log.go:172] (0xc0009f6000) (0xc0008ec000) Stream added, broadcasting: 5\nI0505 22:09:16.055703 3477 log.go:172] (0xc0009f6000) Reply frame received for 5\nI0505 22:09:16.115899 3477 log.go:172] (0xc0009f6000) Data frame received for 5\nI0505 22:09:16.115929 3477 log.go:172] (0xc0008ec000) (5) Data frame handling\nI0505 22:09:16.115950 3477 log.go:172] (0xc0008ec000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0505 22:09:16.147437 3477 log.go:172] (0xc0009f6000) Data frame received for 3\nI0505 22:09:16.147466 3477 log.go:172] (0xc000221400) (3) Data frame handling\nI0505 22:09:16.147489 3477 log.go:172] (0xc000221400) (3) Data frame sent\nI0505 22:09:16.148338 3477 log.go:172] (0xc0009f6000) Data frame received for 3\nI0505 22:09:16.148365 3477 log.go:172] (0xc000221400) (3) Data frame handling\nI0505 22:09:16.148406 3477 log.go:172] (0xc0009f6000) Data frame received for 5\nI0505 22:09:16.148448 3477 log.go:172] (0xc0008ec000) (5) Data frame handling\nI0505 22:09:16.150673 3477 log.go:172] (0xc0009f6000) Data frame received for 1\nI0505 22:09:16.150700 3477 log.go:172] (0xc00068c640) (1) Data frame handling\nI0505 22:09:16.150722 3477 log.go:172] (0xc00068c640) (1) Data frame sent\nI0505 22:09:16.150742 3477 log.go:172] (0xc0009f6000) (0xc00068c640) Stream removed, broadcasting: 1\nI0505 22:09:16.150761 3477 log.go:172] (0xc0009f6000) Go away received\nI0505 22:09:16.151258 3477 log.go:172] (0xc0009f6000) (0xc00068c640) Stream removed, broadcasting: 1\nI0505 22:09:16.151277 3477 log.go:172] (0xc0009f6000) (0xc000221400) Stream removed, broadcasting: 3\nI0505 22:09:16.151286 3477 log.go:172] (0xc0009f6000) (0xc0008ec000) Stream removed, broadcasting: 5\n" May 5 22:09:16.156: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 
5 22:09:16.156: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 5 22:09:16.156: INFO: Waiting for statefulset status.replicas updated to 0 May 5 22:09:16.174: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 5 22:09:26.183: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 5 22:09:26.183: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 5 22:09:26.183: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 5 22:09:26.194: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:26.194: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:26.194: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:26.194: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:26.194: INFO: May 5 22:09:26.194: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 22:09:27.491: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:27.491: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:27.491: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:27.491: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:27.491: INFO: May 5 22:09:27.491: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 22:09:28.496: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:28.497: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:28.497: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:28.497: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:28.497: INFO: May 5 22:09:28.497: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 22:09:29.503: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:29.503: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:29.503: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:29.503: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:29.503: INFO: May 5 22:09:29.503: INFO: StatefulSet ss has not reached scale 0, at 3 May 5 22:09:30.515: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:30.515: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:30.515: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:30.515: INFO: May 5 22:09:30.515: INFO: StatefulSet ss has not reached scale 0, at 2 May 5 22:09:31.519: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:31.519: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:31.520: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:31.520: INFO: May 5 22:09:31.520: INFO: StatefulSet ss has not reached scale 0, at 2 May 5 22:09:32.525: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:32.526: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:32.526: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 
+0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:32.526: INFO: May 5 22:09:32.526: INFO: StatefulSet ss has not reached scale 0, at 2 May 5 22:09:33.530: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:33.530: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:33.530: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:33.530: INFO: May 5 22:09:33.530: INFO: StatefulSet ss has not reached scale 0, at 2 May 5 22:09:34.535: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:34.535: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:34.535: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:34.535: INFO: May 5 22:09:34.535: INFO: StatefulSet ss has not reached scale 0, at 2 May 5 22:09:35.540: INFO: POD NODE PHASE GRACE CONDITIONS May 5 22:09:35.540: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:16 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:08:44 +0000 UTC }] May 5 22:09:35.540: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-05 22:09:04 +0000 UTC }] May 5 22:09:35.540: INFO: 
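The condition dumps above are exactly what a scale-to-zero wait loop observes: once each pod's index.html was moved out of the Apache docroot, the readiness probe (evidently an HTTP check against the served index.html) began failing, Ready flipped to False, and pods drop out of the listing as the controller deletes them. Below is a minimal client-go sketch of such a poll loop; it is an illustration only, assuming a recent client-go (the context-taking Get signature postdates the v1.17 client used in this run) and reusing the kubeconfig path, namespace, and pod name from the log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True -- the same
// field the log's "Waiting for pod ... Ready=false" lines are tracking.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll roughly once per second until the pod is gone, mirroring the
	// one-second cadence of the scale-down wait loop in the log.
	for {
		pod, err := client.CoreV1().Pods("statefulset-983").Get(context.TODO(), "ss-0", metav1.GetOptions{})
		if err != nil {
			fmt.Println("pod gone or unreachable:", err) // NotFound once deletion completes
			return
		}
		fmt.Printf("ss-0 phase=%s ready=%v\n", pod.Status.Phase, podReady(pod))
		time.Sleep(time.Second)
	}
}
```

The e2e framework wraps this same pattern in helpers such as wait.Poll from k8s.io/apimachinery/pkg/util/wait rather than an open-coded loop.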
May 5 22:09:35.540: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-983 May 5 22:09:36.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:09:36.680: INFO: rc: 1 May 5 22:09:36.680: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 5 22:09:46.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:09:46.783: INFO: rc: 1 May 5 22:09:46.783: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:09:56.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:09:56.895: INFO: rc: 1 May 5 22:09:56.895: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:10:06.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:10:07.001: INFO: rc: 1 May 5 22:10:07.001: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:10:17.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:10:17.105: INFO: rc: 1 May 5 22:10:17.105: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:10:27.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:10:27.203: INFO: rc: 1 May 5 22:10:27.203: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:10:37.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:10:37.294: INFO: rc: 1 May 5 22:10:37.294: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:10:47.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:10:47.390: INFO: rc: 1 May 5 22:10:47.391: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:10:57.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:10:57.496: INFO: rc: 1 May 5 22:10:57.496: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:11:07.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:11:07.615: INFO: rc: 1 May 5 22:11:07.615: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:11:17.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:11:17.710: INFO: rc: 1 May 5 22:11:17.710: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:11:27.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:11:27.803: INFO: rc: 1 May 5 22:11:27.803: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from 
server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:11:37.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:11:37.909: INFO: rc: 1 May 5 22:11:37.909: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:11:47.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:11:48.006: INFO: rc: 1 May 5 22:11:48.007: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:11:58.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:11:58.109: INFO: rc: 1 May 5 22:11:58.109: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:12:08.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:12:08.210: INFO: rc: 1 May 5 22:12:08.210: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:12:18.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:12:18.310: INFO: rc: 1 May 5 22:12:18.310: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:12:28.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:12:28.406: INFO: rc: 1 May 5 22:12:28.407: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 
22:12:38.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:12:38.518: INFO: rc: 1 May 5 22:12:38.518: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:12:48.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:12:48.653: INFO: rc: 1 May 5 22:12:48.653: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:12:58.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:12:58.760: INFO: rc: 1 May 5 22:12:58.760: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:13:08.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:13:08.861: INFO: rc: 1 May 5 22:13:08.861: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:13:18.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:13:18.975: INFO: rc: 1 May 5 22:13:18.975: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:13:28.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:13:29.089: INFO: rc: 1 May 5 22:13:29.090: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:13:39.090: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:13:39.214: INFO: rc: 1 May 5 22:13:39.214: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:13:49.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:13:52.170: INFO: rc: 1 May 5 22:13:52.170: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:14:02.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:14:02.265: INFO: rc: 1 May 5 22:14:02.265: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:14:12.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:14:12.362: INFO: rc: 1 May 5 22:14:12.362: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:14:22.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:14:22.469: INFO: rc: 1 May 5 22:14:22.469: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:14:32.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:14:32.571: INFO: rc: 1 May 5 22:14:32.571: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 5 22:14:42.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-983 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 5 22:14:42.684: INFO: rc: 1 May 5 22:14:42.684: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 5 22:14:42.684: INFO: Scaling statefulset ss to 0 May 5 22:14:42.693: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 5 22:14:42.695: INFO: Deleting all statefulset in ns statefulset-983 May 5 22:14:42.698: INFO: Scaling statefulset ss to 0 May 5 22:14:42.705: INFO: Waiting for statefulset status.replicas updated to 0 May 5 22:14:42.707: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:14:42.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-983" for this suite. • [SLOW TEST:358.906 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":223,"skipped":3529,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:14:42.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8686.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8686.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8686.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8686.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8686.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8686.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 22:14:50.867: INFO: DNS probes using dns-8686/dns-test-75a9ba39-c292-4f36-a34c-6f6526ca9eda succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:14:50.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8686" for this suite. • [SLOW TEST:8.223 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":224,"skipped":3530,"failed":0} [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:14:50.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4205.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4205.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 22:15:00.551: INFO: DNS probes using dns-test-c2813667-27a0-44eb-835f-0f0d1d96e0d4 succeeded STEP: deleting the pod STEP: 
changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4205.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4205.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 22:15:06.649: INFO: File wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local from pod dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 22:15:06.652: INFO: File jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local from pod dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 22:15:06.652: INFO: Lookups using dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 failed for: [wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local] May 5 22:15:11.657: INFO: File wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local from pod dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 22:15:11.660: INFO: File jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local from pod dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 22:15:11.660: INFO: Lookups using dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 failed for: [wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local] May 5 22:15:16.657: INFO: File wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local from pod dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 22:15:16.660: INFO: File jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local from pod dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 22:15:16.660: INFO: Lookups using dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 failed for: [wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local] May 5 22:15:21.656: INFO: File wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local from pod dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 22:15:21.659: INFO: File jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local from pod dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 22:15:21.659: INFO: Lookups using dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 failed for: [wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local] May 5 22:15:26.658: INFO: File wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local from pod dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 contains 'foo.example.com. ' instead of 'bar.example.com.' 
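The repeated mismatches in this stretch are expected propagation delay: after the ExternalName is changed to bar.example.com, the probe pods keep receiving the stale foo.example.com. answer from the cluster DNS until the cached CNAME record expires, and the test simply re-runs the lookups (roughly every five seconds here) until both the wheezy and jessie probes see the new target. A standalone sketch of the same retry-until-propagated check, using Go's resolver in place of dig (service and target names are taken from the log; the name only resolves from inside the cluster):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForCNAME re-resolves host until its CNAME matches want -- the same
// retry-until-propagated pattern the probe pods implement with dig.
func waitForCNAME(host, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		got, err := net.LookupCNAME(host)
		if err == nil && got == want {
			return nil
		}
		fmt.Printf("got %q (err=%v), want %q; retrying\n", got, err, want)
		time.Sleep(5 * time.Second) // the log shows ~5s between lookup rounds
	}
	return fmt.Errorf("CNAME for %s did not become %s within %v", host, want, timeout)
}

func main() {
	// Names taken from the log above; only meaningful inside the cluster.
	err := waitForCNAME("dns-test-service-3.dns-4205.svc.cluster.local.", "bar.example.com.", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
```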
May 5 22:15:26.662: INFO: File jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local from pod dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 contains 'foo.example.com. ' instead of 'bar.example.com.' May 5 22:15:26.662: INFO: Lookups using dns-4205/dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 failed for: [wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local] May 5 22:15:31.714: INFO: DNS probes using dns-test-3494b797-d301-4e59-81f9-9099b0e8e874 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4205.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4205.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4205.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4205.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 22:15:40.392: INFO: DNS probes using dns-test-2bbe8612-b9fa-4658-a3dd-3e0d60119720 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:15:40.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4205" for this suite. • [SLOW TEST:49.504 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":225,"skipped":3530,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:15:40.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 5 22:15:40.923: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8258 /api/v1/namespaces/watch-8258/configmaps/e2e-watch-test-label-changed 9c8ac81c-f8c4-4e66-963b-c2abecfa6b11 13692842 0 2020-05-05 22:15:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} May 5 22:15:40.923: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8258 /api/v1/namespaces/watch-8258/configmaps/e2e-watch-test-label-changed 9c8ac81c-f8c4-4e66-963b-c2abecfa6b11 13692844 0 2020-05-05 22:15:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 5 22:15:40.923: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8258 /api/v1/namespaces/watch-8258/configmaps/e2e-watch-test-label-changed 9c8ac81c-f8c4-4e66-963b-c2abecfa6b11 13692845 0 2020-05-05 22:15:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 5 22:15:51.038: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8258 /api/v1/namespaces/watch-8258/configmaps/e2e-watch-test-label-changed 9c8ac81c-f8c4-4e66-963b-c2abecfa6b11 13692921 0 2020-05-05 22:15:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 5 22:15:51.039: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8258 /api/v1/namespaces/watch-8258/configmaps/e2e-watch-test-label-changed 9c8ac81c-f8c4-4e66-963b-c2abecfa6b11 13692922 0 2020-05-05 22:15:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 5 22:15:51.039: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8258 /api/v1/namespaces/watch-8258/configmaps/e2e-watch-test-label-changed 9c8ac81c-f8c4-4e66-963b-c2abecfa6b11 13692923 0 2020-05-05 22:15:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:15:51.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8258" for this suite. 
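The "Got : ADDED/MODIFIED/DELETED" triplets above come from a label-selected watch: notifications are delivered only while the ConfigMap carries the watch-this-configmap=label-changed-and-restored label, changing the label away surfaces as a DELETED event, and restoring it yields a fresh ADDED at the object's current resourceVersion. A minimal client-go sketch of the same watch, again assuming a recent client-go (the context-taking Watch signature postdates the v1.17 client used in this run):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch only ConfigMaps carrying the test label. While the label is
	// changed away no events arrive; restoring it produces an ADDED event,
	// then MODIFIED/DELETED as the object is mutated and removed.
	w, err := client.CoreV1().ConfigMaps("watch-8258").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
```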
• [SLOW TEST:10.557 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":226,"skipped":3561,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:15:51.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:15:51.086: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8476 I0505 22:15:51.109986 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8476, replica count: 1 I0505 22:15:52.160390 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 22:15:53.160648 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 22:15:54.160858 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 22:15:55.161050 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 22:15:56.161420 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 22:15:56.301: INFO: Created: latency-svc-2bkmg May 5 22:15:56.310: INFO: Got endpoints: latency-svc-2bkmg [49.26955ms] May 5 22:15:56.401: INFO: Created: latency-svc-mxxkh May 5 22:15:56.406: INFO: Got endpoints: latency-svc-mxxkh [94.957003ms] May 5 22:15:56.463: INFO: Created: latency-svc-9gvlz May 5 22:15:56.483: INFO: Got endpoints: latency-svc-9gvlz [171.657997ms] May 5 22:15:56.539: INFO: Created: latency-svc-2zxdb May 5 22:15:56.543: INFO: Got endpoints: latency-svc-2zxdb [232.134245ms] May 5 22:15:56.579: INFO: Created: latency-svc-2qn28 May 5 22:15:56.588: INFO: Got endpoints: latency-svc-2qn28 [277.41586ms] May 5 22:15:56.628: INFO: Created: latency-svc-4dv6l May 5 22:15:56.682: INFO: Got endpoints: latency-svc-4dv6l [371.544065ms] May 5 22:15:56.706: INFO: Created: latency-svc-2sknc May 5 22:15:56.720: INFO: Got endpoints: latency-svc-2sknc [409.380929ms] May 5 22:15:56.748: INFO: Created: latency-svc-j4fj2 May 5 22:15:56.763: INFO: Got endpoints: latency-svc-j4fj2 [451.528707ms] May 5 22:15:56.833: INFO: Created: latency-svc-lsn7d May 5 
22:15:56.836: INFO: Got endpoints: latency-svc-lsn7d [525.250859ms] May 5 22:15:56.904: INFO: Created: latency-svc-j269c May 5 22:15:56.929: INFO: Got endpoints: latency-svc-j269c [618.324842ms] May 5 22:15:57.060: INFO: Created: latency-svc-lf7hh May 5 22:15:57.074: INFO: Got endpoints: latency-svc-lf7hh [763.017321ms] May 5 22:15:57.139: INFO: Created: latency-svc-ggqfc May 5 22:15:57.222: INFO: Got endpoints: latency-svc-ggqfc [910.841544ms] May 5 22:15:57.246: INFO: Created: latency-svc-bk7zw May 5 22:15:57.266: INFO: Got endpoints: latency-svc-bk7zw [954.692956ms] May 5 22:15:57.292: INFO: Created: latency-svc-rm7rf May 5 22:15:57.308: INFO: Got endpoints: latency-svc-rm7rf [997.38036ms] May 5 22:15:57.372: INFO: Created: latency-svc-m75fm May 5 22:15:57.386: INFO: Got endpoints: latency-svc-m75fm [1.075271115s] May 5 22:15:57.414: INFO: Created: latency-svc-b7rbl May 5 22:15:57.429: INFO: Got endpoints: latency-svc-b7rbl [1.117681642s] May 5 22:15:57.454: INFO: Created: latency-svc-zn6xm May 5 22:15:57.520: INFO: Got endpoints: latency-svc-zn6xm [1.114487639s] May 5 22:15:57.564: INFO: Created: latency-svc-jjcpt May 5 22:15:57.579: INFO: Got endpoints: latency-svc-jjcpt [1.096599898s] May 5 22:15:57.688: INFO: Created: latency-svc-2zshz May 5 22:15:57.725: INFO: Got endpoints: latency-svc-2zshz [1.18200658s] May 5 22:15:57.766: INFO: Created: latency-svc-nq24t May 5 22:15:57.774: INFO: Got endpoints: latency-svc-nq24t [1.186087669s] May 5 22:15:57.826: INFO: Created: latency-svc-2db7j May 5 22:15:57.835: INFO: Got endpoints: latency-svc-2db7j [1.152303169s] May 5 22:15:57.882: INFO: Created: latency-svc-fnhd9 May 5 22:15:57.981: INFO: Got endpoints: latency-svc-fnhd9 [1.261097717s] May 5 22:15:58.012: INFO: Created: latency-svc-wl9gf May 5 22:15:58.057: INFO: Got endpoints: latency-svc-wl9gf [1.294918659s] May 5 22:15:58.138: INFO: Created: latency-svc-4766b May 5 22:15:58.143: INFO: Got endpoints: latency-svc-4766b [1.30687694s] May 5 22:15:58.182: INFO: Created: latency-svc-f9gg9 May 5 22:15:58.215: INFO: Got endpoints: latency-svc-f9gg9 [1.286055676s] May 5 22:15:58.287: INFO: Created: latency-svc-jgcb2 May 5 22:15:58.316: INFO: Got endpoints: latency-svc-jgcb2 [1.242479344s] May 5 22:15:58.510: INFO: Created: latency-svc-c42rw May 5 22:15:58.514: INFO: Got endpoints: latency-svc-c42rw [1.292070185s] May 5 22:15:58.571: INFO: Created: latency-svc-9kqsh May 5 22:15:58.694: INFO: Got endpoints: latency-svc-9kqsh [1.428833716s] May 5 22:15:58.767: INFO: Created: latency-svc-r24g9 May 5 22:15:58.886: INFO: Got endpoints: latency-svc-r24g9 [1.577430085s] May 5 22:15:58.919: INFO: Created: latency-svc-ldpnj May 5 22:15:58.951: INFO: Got endpoints: latency-svc-ldpnj [1.564513552s] May 5 22:15:59.030: INFO: Created: latency-svc-9msh7 May 5 22:15:59.078: INFO: Got endpoints: latency-svc-9msh7 [1.649608045s] May 5 22:15:59.186: INFO: Created: latency-svc-88k9v May 5 22:15:59.191: INFO: Got endpoints: latency-svc-88k9v [1.671002735s] May 5 22:15:59.580: INFO: Created: latency-svc-kpsp7 May 5 22:15:59.724: INFO: Got endpoints: latency-svc-kpsp7 [2.145090367s] May 5 22:15:59.740: INFO: Created: latency-svc-84s5c May 5 22:15:59.898: INFO: Got endpoints: latency-svc-84s5c [2.173551155s] May 5 22:15:59.969: INFO: Created: latency-svc-qnhpq May 5 22:16:00.103: INFO: Got endpoints: latency-svc-qnhpq [2.328530562s] May 5 22:16:00.169: INFO: Created: latency-svc-vw4w9 May 5 22:16:00.181: INFO: Got endpoints: latency-svc-vw4w9 [2.346517836s] May 5 22:16:00.275: INFO: Created: latency-svc-bxzm5 May 5 
22:16:00.297: INFO: Got endpoints: latency-svc-bxzm5 [2.315836627s] May 5 22:16:00.299: INFO: Created: latency-svc-fb9j2 May 5 22:16:00.333: INFO: Got endpoints: latency-svc-fb9j2 [2.275694888s] May 5 22:16:00.357: INFO: Created: latency-svc-m6nqv May 5 22:16:00.408: INFO: Got endpoints: latency-svc-m6nqv [2.265305691s] May 5 22:16:00.421: INFO: Created: latency-svc-g8g7l May 5 22:16:00.434: INFO: Got endpoints: latency-svc-g8g7l [2.219108045s] May 5 22:16:00.466: INFO: Created: latency-svc-4fcd7 May 5 22:16:00.483: INFO: Got endpoints: latency-svc-4fcd7 [2.166775537s] May 5 22:16:00.504: INFO: Created: latency-svc-tsqzj May 5 22:16:00.564: INFO: Got endpoints: latency-svc-tsqzj [2.049709151s] May 5 22:16:00.604: INFO: Created: latency-svc-mr759 May 5 22:16:00.622: INFO: Got endpoints: latency-svc-mr759 [1.927308101s] May 5 22:16:00.645: INFO: Created: latency-svc-jprg4 May 5 22:16:00.694: INFO: Got endpoints: latency-svc-jprg4 [1.8083067s] May 5 22:16:00.717: INFO: Created: latency-svc-nnsvw May 5 22:16:00.752: INFO: Got endpoints: latency-svc-nnsvw [1.800932514s] May 5 22:16:00.780: INFO: Created: latency-svc-ffj7p May 5 22:16:00.791: INFO: Got endpoints: latency-svc-ffj7p [1.712418102s] May 5 22:16:00.874: INFO: Created: latency-svc-dq7hk May 5 22:16:00.940: INFO: Got endpoints: latency-svc-dq7hk [1.749112618s] May 5 22:16:00.941: INFO: Created: latency-svc-hj9hc May 5 22:16:00.947: INFO: Got endpoints: latency-svc-hj9hc [1.222712276s] May 5 22:16:00.970: INFO: Created: latency-svc-tpxww May 5 22:16:01.011: INFO: Got endpoints: latency-svc-tpxww [1.112761705s] May 5 22:16:01.026: INFO: Created: latency-svc-dvm2s May 5 22:16:01.045: INFO: Got endpoints: latency-svc-dvm2s [942.060026ms] May 5 22:16:01.098: INFO: Created: latency-svc-lgdjm May 5 22:16:01.111: INFO: Got endpoints: latency-svc-lgdjm [929.599826ms] May 5 22:16:01.175: INFO: Created: latency-svc-qln59 May 5 22:16:01.196: INFO: Got endpoints: latency-svc-qln59 [898.358786ms] May 5 22:16:01.221: INFO: Created: latency-svc-rksdl May 5 22:16:01.238: INFO: Got endpoints: latency-svc-rksdl [904.881455ms] May 5 22:16:01.261: INFO: Created: latency-svc-jnhzr May 5 22:16:01.356: INFO: Created: latency-svc-wj66k May 5 22:16:01.356: INFO: Got endpoints: latency-svc-jnhzr [947.672308ms] May 5 22:16:01.364: INFO: Got endpoints: latency-svc-wj66k [929.908046ms] May 5 22:16:01.409: INFO: Created: latency-svc-xmgfw May 5 22:16:01.425: INFO: Got endpoints: latency-svc-xmgfw [942.118256ms] May 5 22:16:01.451: INFO: Created: latency-svc-mqf4n May 5 22:16:01.496: INFO: Got endpoints: latency-svc-mqf4n [932.482883ms] May 5 22:16:01.511: INFO: Created: latency-svc-r74tl May 5 22:16:01.529: INFO: Got endpoints: latency-svc-r74tl [907.183162ms] May 5 22:16:01.569: INFO: Created: latency-svc-87kln May 5 22:16:01.588: INFO: Got endpoints: latency-svc-87kln [893.916607ms] May 5 22:16:01.641: INFO: Created: latency-svc-99vj4 May 5 22:16:01.643: INFO: Got endpoints: latency-svc-99vj4 [891.001595ms] May 5 22:16:01.791: INFO: Created: latency-svc-fgbwl May 5 22:16:01.794: INFO: Got endpoints: latency-svc-fgbwl [1.003151345s] May 5 22:16:01.821: INFO: Created: latency-svc-8nzg6 May 5 22:16:01.835: INFO: Got endpoints: latency-svc-8nzg6 [894.988944ms] May 5 22:16:01.869: INFO: Created: latency-svc-tr65z May 5 22:16:01.878: INFO: Got endpoints: latency-svc-tr65z [930.615229ms] May 5 22:16:01.937: INFO: Created: latency-svc-5s4xq May 5 22:16:01.956: INFO: Got endpoints: latency-svc-5s4xq [944.910648ms] May 5 22:16:01.980: INFO: Created: latency-svc-cjvrp May 5 
22:16:01.983: INFO: Got endpoints: latency-svc-cjvrp [937.728835ms] May 5 22:16:02.098: INFO: Created: latency-svc-kllcf May 5 22:16:02.119: INFO: Got endpoints: latency-svc-kllcf [1.008399581s] May 5 22:16:02.140: INFO: Created: latency-svc-sd975 May 5 22:16:02.150: INFO: Got endpoints: latency-svc-sd975 [953.838714ms] May 5 22:16:02.169: INFO: Created: latency-svc-hmc5d May 5 22:16:02.259: INFO: Got endpoints: latency-svc-hmc5d [1.020559187s] May 5 22:16:02.280: INFO: Created: latency-svc-m59z5 May 5 22:16:02.313: INFO: Got endpoints: latency-svc-m59z5 [956.35778ms] May 5 22:16:02.349: INFO: Created: latency-svc-8q9h7 May 5 22:16:02.395: INFO: Got endpoints: latency-svc-8q9h7 [1.030317178s] May 5 22:16:02.415: INFO: Created: latency-svc-k8m5r May 5 22:16:02.433: INFO: Got endpoints: latency-svc-k8m5r [1.008008783s] May 5 22:16:02.454: INFO: Created: latency-svc-w25nz May 5 22:16:02.470: INFO: Got endpoints: latency-svc-w25nz [973.35472ms] May 5 22:16:02.490: INFO: Created: latency-svc-7rhw6 May 5 22:16:02.539: INFO: Got endpoints: latency-svc-7rhw6 [1.009805038s] May 5 22:16:02.551: INFO: Created: latency-svc-w5bxc May 5 22:16:02.583: INFO: Got endpoints: latency-svc-w5bxc [994.495472ms] May 5 22:16:02.614: INFO: Created: latency-svc-4dxkd May 5 22:16:02.627: INFO: Got endpoints: latency-svc-4dxkd [983.739534ms] May 5 22:16:02.676: INFO: Created: latency-svc-rzvjx May 5 22:16:02.684: INFO: Got endpoints: latency-svc-rzvjx [890.025872ms] May 5 22:16:02.718: INFO: Created: latency-svc-9wd5r May 5 22:16:02.741: INFO: Got endpoints: latency-svc-9wd5r [906.071761ms] May 5 22:16:02.820: INFO: Created: latency-svc-8dmqt May 5 22:16:02.852: INFO: Created: latency-svc-s9qhw May 5 22:16:02.853: INFO: Got endpoints: latency-svc-8dmqt [975.2025ms] May 5 22:16:02.868: INFO: Got endpoints: latency-svc-s9qhw [911.967167ms] May 5 22:16:02.889: INFO: Created: latency-svc-b78f7 May 5 22:16:02.970: INFO: Got endpoints: latency-svc-b78f7 [987.26431ms] May 5 22:16:02.981: INFO: Created: latency-svc-wtk96 May 5 22:16:03.013: INFO: Got endpoints: latency-svc-wtk96 [893.767529ms] May 5 22:16:03.065: INFO: Created: latency-svc-bpcsj May 5 22:16:03.113: INFO: Got endpoints: latency-svc-bpcsj [963.718833ms] May 5 22:16:03.123: INFO: Created: latency-svc-6kr85 May 5 22:16:03.140: INFO: Got endpoints: latency-svc-6kr85 [881.136389ms] May 5 22:16:03.159: INFO: Created: latency-svc-k96tp May 5 22:16:03.170: INFO: Got endpoints: latency-svc-k96tp [857.716128ms] May 5 22:16:03.207: INFO: Created: latency-svc-ghs8q May 5 22:16:03.251: INFO: Got endpoints: latency-svc-ghs8q [855.855622ms] May 5 22:16:03.263: INFO: Created: latency-svc-l69kn May 5 22:16:03.279: INFO: Got endpoints: latency-svc-l69kn [845.418377ms] May 5 22:16:03.299: INFO: Created: latency-svc-m44bc May 5 22:16:03.315: INFO: Got endpoints: latency-svc-m44bc [845.568124ms] May 5 22:16:03.335: INFO: Created: latency-svc-wj4vn May 5 22:16:03.406: INFO: Got endpoints: latency-svc-wj4vn [867.358267ms] May 5 22:16:03.435: INFO: Created: latency-svc-gv2jf May 5 22:16:03.448: INFO: Got endpoints: latency-svc-gv2jf [865.154312ms] May 5 22:16:03.471: INFO: Created: latency-svc-vn67p May 5 22:16:03.514: INFO: Got endpoints: latency-svc-vn67p [887.641469ms] May 5 22:16:03.527: INFO: Created: latency-svc-6bjmr May 5 22:16:03.545: INFO: Got endpoints: latency-svc-6bjmr [861.043621ms] May 5 22:16:03.569: INFO: Created: latency-svc-dg5s2 May 5 22:16:03.588: INFO: Got endpoints: latency-svc-dg5s2 [846.191696ms] May 5 22:16:03.611: INFO: Created: latency-svc-fnj6h May 5 
22:16:03.670: INFO: Got endpoints: latency-svc-fnj6h [816.849166ms] May 5 22:16:03.694: INFO: Created: latency-svc-ksrnf May 5 22:16:03.704: INFO: Got endpoints: latency-svc-ksrnf [835.262075ms] May 5 22:16:03.734: INFO: Created: latency-svc-mvnpm May 5 22:16:03.756: INFO: Got endpoints: latency-svc-mvnpm [786.425989ms] May 5 22:16:03.850: INFO: Created: latency-svc-m6chl May 5 22:16:03.859: INFO: Got endpoints: latency-svc-m6chl [845.520933ms] May 5 22:16:03.879: INFO: Created: latency-svc-pgcxc May 5 22:16:03.895: INFO: Got endpoints: latency-svc-pgcxc [781.607556ms] May 5 22:16:03.926: INFO: Created: latency-svc-p7c62 May 5 22:16:03.943: INFO: Got endpoints: latency-svc-p7c62 [803.347735ms] May 5 22:16:04.007: INFO: Created: latency-svc-56gwn May 5 22:16:04.022: INFO: Got endpoints: latency-svc-56gwn [851.412787ms] May 5 22:16:04.073: INFO: Created: latency-svc-dhbdx May 5 22:16:04.138: INFO: Got endpoints: latency-svc-dhbdx [887.496374ms] May 5 22:16:04.172: INFO: Created: latency-svc-5zk9k May 5 22:16:04.190: INFO: Got endpoints: latency-svc-5zk9k [911.624649ms] May 5 22:16:04.287: INFO: Created: latency-svc-rjtrl May 5 22:16:04.293: INFO: Got endpoints: latency-svc-rjtrl [977.327349ms] May 5 22:16:04.358: INFO: Created: latency-svc-fksx9 May 5 22:16:04.383: INFO: Got endpoints: latency-svc-fksx9 [977.056773ms] May 5 22:16:04.450: INFO: Created: latency-svc-92msc May 5 22:16:04.456: INFO: Got endpoints: latency-svc-92msc [1.007617414s] May 5 22:16:04.480: INFO: Created: latency-svc-sg4b4 May 5 22:16:04.511: INFO: Got endpoints: latency-svc-sg4b4 [996.288034ms] May 5 22:16:04.547: INFO: Created: latency-svc-vwgkm May 5 22:16:04.604: INFO: Got endpoints: latency-svc-vwgkm [1.05913696s] May 5 22:16:04.619: INFO: Created: latency-svc-v5dhz May 5 22:16:04.637: INFO: Got endpoints: latency-svc-v5dhz [1.049005053s] May 5 22:16:04.658: INFO: Created: latency-svc-j99mr May 5 22:16:04.677: INFO: Got endpoints: latency-svc-j99mr [1.007142902s] May 5 22:16:04.754: INFO: Created: latency-svc-h8ztg May 5 22:16:04.763: INFO: Got endpoints: latency-svc-h8ztg [1.059598148s] May 5 22:16:04.784: INFO: Created: latency-svc-98dgd May 5 22:16:04.794: INFO: Got endpoints: latency-svc-98dgd [1.03744459s] May 5 22:16:04.817: INFO: Created: latency-svc-b4kgv May 5 22:16:04.836: INFO: Got endpoints: latency-svc-b4kgv [977.523568ms] May 5 22:16:04.892: INFO: Created: latency-svc-9z8rx May 5 22:16:04.902: INFO: Got endpoints: latency-svc-9z8rx [1.006981275s] May 5 22:16:04.958: INFO: Created: latency-svc-x6sfd May 5 22:16:04.982: INFO: Got endpoints: latency-svc-x6sfd [1.038214529s] May 5 22:16:05.053: INFO: Created: latency-svc-jrtz9 May 5 22:16:05.056: INFO: Got endpoints: latency-svc-jrtz9 [1.034266774s] May 5 22:16:05.104: INFO: Created: latency-svc-l5f6c May 5 22:16:05.126: INFO: Got endpoints: latency-svc-l5f6c [987.991696ms] May 5 22:16:05.153: INFO: Created: latency-svc-9kmh6 May 5 22:16:05.192: INFO: Got endpoints: latency-svc-9kmh6 [1.001154689s] May 5 22:16:05.217: INFO: Created: latency-svc-r8nbp May 5 22:16:05.235: INFO: Got endpoints: latency-svc-r8nbp [942.211396ms] May 5 22:16:05.258: INFO: Created: latency-svc-qxgfq May 5 22:16:05.271: INFO: Got endpoints: latency-svc-qxgfq [887.702771ms] May 5 22:16:05.329: INFO: Created: latency-svc-jzqtt May 5 22:16:05.356: INFO: Got endpoints: latency-svc-jzqtt [900.150707ms] May 5 22:16:05.404: INFO: Created: latency-svc-6lgkh May 5 22:16:05.422: INFO: Got endpoints: latency-svc-6lgkh [911.434991ms] May 5 22:16:05.504: INFO: Created: latency-svc-rszxr May 5 
22:16:05.518: INFO: Got endpoints: latency-svc-rszxr [913.780506ms] May 5 22:16:05.558: INFO: Created: latency-svc-zrth4 May 5 22:16:05.573: INFO: Got endpoints: latency-svc-zrth4 [935.899373ms] May 5 22:16:05.634: INFO: Created: latency-svc-4mgbq May 5 22:16:05.639: INFO: Got endpoints: latency-svc-4mgbq [962.067669ms] May 5 22:16:05.674: INFO: Created: latency-svc-mknms May 5 22:16:05.694: INFO: Got endpoints: latency-svc-mknms [930.523488ms] May 5 22:16:05.726: INFO: Created: latency-svc-w2dxl May 5 22:16:05.772: INFO: Got endpoints: latency-svc-w2dxl [977.67681ms] May 5 22:16:05.785: INFO: Created: latency-svc-fv228 May 5 22:16:05.803: INFO: Got endpoints: latency-svc-fv228 [966.49717ms] May 5 22:16:05.828: INFO: Created: latency-svc-nm29j May 5 22:16:05.845: INFO: Got endpoints: latency-svc-nm29j [942.773089ms] May 5 22:16:05.914: INFO: Created: latency-svc-2v5dj May 5 22:16:05.926: INFO: Got endpoints: latency-svc-2v5dj [944.10693ms] May 5 22:16:05.984: INFO: Created: latency-svc-9s74f May 5 22:16:06.030: INFO: Got endpoints: latency-svc-9s74f [973.527218ms] May 5 22:16:06.061: INFO: Created: latency-svc-v5cwg May 5 22:16:06.080: INFO: Got endpoints: latency-svc-v5cwg [953.776716ms] May 5 22:16:06.103: INFO: Created: latency-svc-w5s6b May 5 22:16:06.123: INFO: Got endpoints: latency-svc-w5s6b [931.685665ms] May 5 22:16:06.189: INFO: Created: latency-svc-z767c May 5 22:16:06.200: INFO: Got endpoints: latency-svc-z767c [965.293294ms] May 5 22:16:06.244: INFO: Created: latency-svc-lm5k4 May 5 22:16:06.267: INFO: Got endpoints: latency-svc-lm5k4 [995.601301ms] May 5 22:16:06.311: INFO: Created: latency-svc-s4pm6 May 5 22:16:06.315: INFO: Got endpoints: latency-svc-s4pm6 [959.394193ms] May 5 22:16:06.343: INFO: Created: latency-svc-qknnh May 5 22:16:06.358: INFO: Got endpoints: latency-svc-qknnh [935.381973ms] May 5 22:16:06.394: INFO: Created: latency-svc-rz8ch May 5 22:16:06.448: INFO: Got endpoints: latency-svc-rz8ch [930.191056ms] May 5 22:16:06.461: INFO: Created: latency-svc-7j5fn May 5 22:16:06.473: INFO: Got endpoints: latency-svc-7j5fn [899.775723ms] May 5 22:16:06.502: INFO: Created: latency-svc-2kn8f May 5 22:16:06.516: INFO: Got endpoints: latency-svc-2kn8f [876.271216ms] May 5 22:16:06.599: INFO: Created: latency-svc-dhwwk May 5 22:16:06.602: INFO: Got endpoints: latency-svc-dhwwk [908.184734ms] May 5 22:16:06.643: INFO: Created: latency-svc-scxwq May 5 22:16:06.660: INFO: Got endpoints: latency-svc-scxwq [887.886886ms] May 5 22:16:06.748: INFO: Created: latency-svc-47b82 May 5 22:16:06.760: INFO: Got endpoints: latency-svc-47b82 [956.7653ms] May 5 22:16:06.790: INFO: Created: latency-svc-bq2xc May 5 22:16:06.823: INFO: Got endpoints: latency-svc-bq2xc [978.039016ms] May 5 22:16:06.910: INFO: Created: latency-svc-xmqqv May 5 22:16:06.940: INFO: Got endpoints: latency-svc-xmqqv [1.013737452s] May 5 22:16:06.970: INFO: Created: latency-svc-85qzb May 5 22:16:06.979: INFO: Got endpoints: latency-svc-85qzb [949.460101ms] May 5 22:16:07.048: INFO: Created: latency-svc-6gng8 May 5 22:16:07.076: INFO: Got endpoints: latency-svc-6gng8 [995.549435ms] May 5 22:16:07.108: INFO: Created: latency-svc-jxtz6 May 5 22:16:07.124: INFO: Got endpoints: latency-svc-jxtz6 [1.000115966s] May 5 22:16:07.191: INFO: Created: latency-svc-wstrb May 5 22:16:07.194: INFO: Got endpoints: latency-svc-wstrb [993.983743ms] May 5 22:16:07.228: INFO: Created: latency-svc-822lw May 5 22:16:07.239: INFO: Got endpoints: latency-svc-822lw [972.083167ms] May 5 22:16:07.270: INFO: Created: latency-svc-hv292 May 5 
22:16:07.281: INFO: Got endpoints: latency-svc-hv292 [966.094223ms] May 5 22:16:07.335: INFO: Created: latency-svc-shrvz May 5 22:16:07.342: INFO: Got endpoints: latency-svc-shrvz [983.75188ms] May 5 22:16:07.383: INFO: Created: latency-svc-mbpmd May 5 22:16:07.408: INFO: Got endpoints: latency-svc-mbpmd [959.28531ms] May 5 22:16:07.479: INFO: Created: latency-svc-rvsnc May 5 22:16:07.495: INFO: Got endpoints: latency-svc-rvsnc [1.021981667s] May 5 22:16:07.525: INFO: Created: latency-svc-zm5d9 May 5 22:16:07.540: INFO: Got endpoints: latency-svc-zm5d9 [1.024805548s] May 5 22:16:07.561: INFO: Created: latency-svc-qbrxn May 5 22:16:07.577: INFO: Got endpoints: latency-svc-qbrxn [975.142933ms] May 5 22:16:07.627: INFO: Created: latency-svc-s4qqn May 5 22:16:07.628: INFO: Got endpoints: latency-svc-s4qqn [968.79857ms] May 5 22:16:07.666: INFO: Created: latency-svc-v84jk May 5 22:16:07.674: INFO: Got endpoints: latency-svc-v84jk [914.219795ms] May 5 22:16:07.708: INFO: Created: latency-svc-9c9bz May 5 22:16:07.784: INFO: Got endpoints: latency-svc-9c9bz [961.426852ms] May 5 22:16:07.785: INFO: Created: latency-svc-ff7wd May 5 22:16:07.794: INFO: Got endpoints: latency-svc-ff7wd [854.801571ms] May 5 22:16:07.819: INFO: Created: latency-svc-b8fvj May 5 22:16:07.837: INFO: Got endpoints: latency-svc-b8fvj [857.945875ms] May 5 22:16:07.863: INFO: Created: latency-svc-ztq2m May 5 22:16:07.946: INFO: Got endpoints: latency-svc-ztq2m [870.433007ms] May 5 22:16:07.975: INFO: Created: latency-svc-b227w May 5 22:16:07.980: INFO: Got endpoints: latency-svc-b227w [856.8ms] May 5 22:16:08.019: INFO: Created: latency-svc-2dglm May 5 22:16:08.035: INFO: Got endpoints: latency-svc-2dglm [840.37635ms] May 5 22:16:08.109: INFO: Created: latency-svc-c6tx8 May 5 22:16:08.113: INFO: Got endpoints: latency-svc-c6tx8 [874.33953ms] May 5 22:16:08.138: INFO: Created: latency-svc-wc7vb May 5 22:16:08.150: INFO: Got endpoints: latency-svc-wc7vb [868.064982ms] May 5 22:16:08.173: INFO: Created: latency-svc-jdvv9 May 5 22:16:08.187: INFO: Got endpoints: latency-svc-jdvv9 [844.933187ms] May 5 22:16:08.270: INFO: Created: latency-svc-xd5bw May 5 22:16:08.272: INFO: Got endpoints: latency-svc-xd5bw [864.59967ms] May 5 22:16:08.326: INFO: Created: latency-svc-42s55 May 5 22:16:08.343: INFO: Got endpoints: latency-svc-42s55 [848.246827ms] May 5 22:16:08.365: INFO: Created: latency-svc-hcbnm May 5 22:16:08.414: INFO: Got endpoints: latency-svc-hcbnm [873.205614ms] May 5 22:16:08.419: INFO: Created: latency-svc-jfggs May 5 22:16:08.434: INFO: Got endpoints: latency-svc-jfggs [856.812432ms] May 5 22:16:08.473: INFO: Created: latency-svc-zpjvg May 5 22:16:08.507: INFO: Got endpoints: latency-svc-zpjvg [878.180688ms] May 5 22:16:08.563: INFO: Created: latency-svc-jm48k May 5 22:16:08.566: INFO: Got endpoints: latency-svc-jm48k [892.111364ms] May 5 22:16:08.595: INFO: Created: latency-svc-h27wk May 5 22:16:08.609: INFO: Got endpoints: latency-svc-h27wk [824.624684ms] May 5 22:16:08.632: INFO: Created: latency-svc-k2fr8 May 5 22:16:08.700: INFO: Got endpoints: latency-svc-k2fr8 [905.535141ms] May 5 22:16:08.706: INFO: Created: latency-svc-2tzqh May 5 22:16:08.723: INFO: Got endpoints: latency-svc-2tzqh [886.049259ms] May 5 22:16:08.751: INFO: Created: latency-svc-xqv9n May 5 22:16:08.772: INFO: Got endpoints: latency-svc-xqv9n [825.740821ms] May 5 22:16:08.793: INFO: Created: latency-svc-pbrkf May 5 22:16:08.856: INFO: Got endpoints: latency-svc-pbrkf [875.107806ms] May 5 22:16:08.881: INFO: Created: latency-svc-9vv8t May 5 
22:16:08.899: INFO: Got endpoints: latency-svc-9vv8t [863.870748ms] May 5 22:16:08.923: INFO: Created: latency-svc-7m7zt May 5 22:16:08.935: INFO: Got endpoints: latency-svc-7m7zt [821.630362ms] May 5 22:16:09.012: INFO: Created: latency-svc-nj8fw May 5 22:16:09.027: INFO: Got endpoints: latency-svc-nj8fw [877.066475ms] May 5 22:16:09.063: INFO: Created: latency-svc-2qgbf May 5 22:16:09.080: INFO: Got endpoints: latency-svc-2qgbf [893.694331ms] May 5 22:16:09.168: INFO: Created: latency-svc-6tl4r May 5 22:16:09.186: INFO: Got endpoints: latency-svc-6tl4r [913.72604ms] May 5 22:16:09.218: INFO: Created: latency-svc-46nkq May 5 22:16:09.236: INFO: Got endpoints: latency-svc-46nkq [892.682883ms] May 5 22:16:09.267: INFO: Created: latency-svc-v2lwk May 5 22:16:09.317: INFO: Got endpoints: latency-svc-v2lwk [903.310839ms] May 5 22:16:09.333: INFO: Created: latency-svc-77h89 May 5 22:16:09.352: INFO: Got endpoints: latency-svc-77h89 [917.462283ms] May 5 22:16:09.396: INFO: Created: latency-svc-b55zf May 5 22:16:09.455: INFO: Got endpoints: latency-svc-b55zf [947.802762ms] May 5 22:16:09.480: INFO: Created: latency-svc-cd2ms May 5 22:16:09.490: INFO: Got endpoints: latency-svc-cd2ms [924.061113ms] May 5 22:16:09.510: INFO: Created: latency-svc-z8p6b May 5 22:16:09.520: INFO: Got endpoints: latency-svc-z8p6b [910.962183ms] May 5 22:16:09.543: INFO: Created: latency-svc-6k9tf May 5 22:16:09.593: INFO: Got endpoints: latency-svc-6k9tf [892.829972ms] May 5 22:16:09.603: INFO: Created: latency-svc-xsz5h May 5 22:16:09.617: INFO: Got endpoints: latency-svc-xsz5h [893.911265ms] May 5 22:16:09.639: INFO: Created: latency-svc-ns2f4 May 5 22:16:09.679: INFO: Got endpoints: latency-svc-ns2f4 [906.784555ms] May 5 22:16:09.760: INFO: Created: latency-svc-v8vkb May 5 22:16:09.767: INFO: Got endpoints: latency-svc-v8vkb [910.992488ms] May 5 22:16:09.795: INFO: Created: latency-svc-fwlbz May 5 22:16:09.811: INFO: Got endpoints: latency-svc-fwlbz [911.902658ms] May 5 22:16:09.832: INFO: Created: latency-svc-rz7l9 May 5 22:16:09.847: INFO: Got endpoints: latency-svc-rz7l9 [911.767392ms] May 5 22:16:09.904: INFO: Created: latency-svc-cm928 May 5 22:16:09.913: INFO: Got endpoints: latency-svc-cm928 [886.404544ms] May 5 22:16:09.960: INFO: Created: latency-svc-k9vf8 May 5 22:16:09.998: INFO: Got endpoints: latency-svc-k9vf8 [917.646295ms] May 5 22:16:10.042: INFO: Created: latency-svc-jkrtw May 5 22:16:10.082: INFO: Got endpoints: latency-svc-jkrtw [896.167454ms] May 5 22:16:10.135: INFO: Created: latency-svc-5g5wk May 5 22:16:10.185: INFO: Got endpoints: latency-svc-5g5wk [949.735072ms] May 5 22:16:10.194: INFO: Created: latency-svc-vmvzh May 5 22:16:10.208: INFO: Got endpoints: latency-svc-vmvzh [891.316743ms] May 5 22:16:10.236: INFO: Created: latency-svc-l44m6 May 5 22:16:10.263: INFO: Got endpoints: latency-svc-l44m6 [910.953687ms] May 5 22:16:10.329: INFO: Created: latency-svc-4hftc May 5 22:16:10.335: INFO: Got endpoints: latency-svc-4hftc [880.47773ms] May 5 22:16:10.356: INFO: Created: latency-svc-gj2b5 May 5 22:16:10.372: INFO: Got endpoints: latency-svc-gj2b5 [881.91198ms] May 5 22:16:10.372: INFO: Latencies: [94.957003ms 171.657997ms 232.134245ms 277.41586ms 371.544065ms 409.380929ms 451.528707ms 525.250859ms 618.324842ms 763.017321ms 781.607556ms 786.425989ms 803.347735ms 816.849166ms 821.630362ms 824.624684ms 825.740821ms 835.262075ms 840.37635ms 844.933187ms 845.418377ms 845.520933ms 845.568124ms 846.191696ms 848.246827ms 851.412787ms 854.801571ms 855.855622ms 856.8ms 856.812432ms 857.716128ms 
857.945875ms 861.043621ms 863.870748ms 864.59967ms 865.154312ms 867.358267ms 868.064982ms 870.433007ms 873.205614ms 874.33953ms 875.107806ms 876.271216ms 877.066475ms 878.180688ms 880.47773ms 881.136389ms 881.91198ms 886.049259ms 886.404544ms 887.496374ms 887.641469ms 887.702771ms 887.886886ms 890.025872ms 891.001595ms 891.316743ms 892.111364ms 892.682883ms 892.829972ms 893.694331ms 893.767529ms 893.911265ms 893.916607ms 894.988944ms 896.167454ms 898.358786ms 899.775723ms 900.150707ms 903.310839ms 904.881455ms 905.535141ms 906.071761ms 906.784555ms 907.183162ms 908.184734ms 910.841544ms 910.953687ms 910.962183ms 910.992488ms 911.434991ms 911.624649ms 911.767392ms 911.902658ms 911.967167ms 913.72604ms 913.780506ms 914.219795ms 917.462283ms 917.646295ms 924.061113ms 929.599826ms 929.908046ms 930.191056ms 930.523488ms 930.615229ms 931.685665ms 932.482883ms 935.381973ms 935.899373ms 937.728835ms 942.060026ms 942.118256ms 942.211396ms 942.773089ms 944.10693ms 944.910648ms 947.672308ms 947.802762ms 949.460101ms 949.735072ms 953.776716ms 953.838714ms 954.692956ms 956.35778ms 956.7653ms 959.28531ms 959.394193ms 961.426852ms 962.067669ms 963.718833ms 965.293294ms 966.094223ms 966.49717ms 968.79857ms 972.083167ms 973.35472ms 973.527218ms 975.142933ms 975.2025ms 977.056773ms 977.327349ms 977.523568ms 977.67681ms 978.039016ms 983.739534ms 983.75188ms 987.26431ms 987.991696ms 993.983743ms 994.495472ms 995.549435ms 995.601301ms 996.288034ms 997.38036ms 1.000115966s 1.001154689s 1.003151345s 1.006981275s 1.007142902s 1.007617414s 1.008008783s 1.008399581s 1.009805038s 1.013737452s 1.020559187s 1.021981667s 1.024805548s 1.030317178s 1.034266774s 1.03744459s 1.038214529s 1.049005053s 1.05913696s 1.059598148s 1.075271115s 1.096599898s 1.112761705s 1.114487639s 1.117681642s 1.152303169s 1.18200658s 1.186087669s 1.222712276s 1.242479344s 1.261097717s 1.286055676s 1.292070185s 1.294918659s 1.30687694s 1.428833716s 1.564513552s 1.577430085s 1.649608045s 1.671002735s 1.712418102s 1.749112618s 1.800932514s 1.8083067s 1.927308101s 2.049709151s 2.145090367s 2.166775537s 2.173551155s 2.219108045s 2.265305691s 2.275694888s 2.315836627s 2.328530562s 2.346517836s] May 5 22:16:10.372: INFO: 50 %ile: 937.728835ms May 5 22:16:10.373: INFO: 90 %ile: 1.428833716s May 5 22:16:10.373: INFO: 99 %ile: 2.328530562s May 5 22:16:10.373: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:16:10.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8476" for this suite. 
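The per-service numbers above measure the time from Service creation to the first address appearing in the paired Endpoints object. A minimal client-go sketch of that measurement, under stated assumptions (illustrative names; client-go v0.18+ signatures, where calls take a context; the pod selector is assumed from the replication controller's name; the namespace is the one from this run), could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns := "svc-latency-8476" // namespace from the run above

	// Create a Service selecting the svc-latency-rc pods, then poll its
	// Endpoints object until at least one ready address appears.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "latency-svc-"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "svc-latency-rc"}, // label assumed from the RC name
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	start := time.Now()
	created, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	err = wait.PollImmediate(50*time.Millisecond, time.Minute, func() (bool, error) {
		ep, getErr := cs.CoreV1().Endpoints(ns).Get(context.TODO(), created.Name, metav1.GetOptions{})
		if getErr != nil {
			return false, nil // Endpoints object not created yet; keep polling
		}
		for _, ss := range ep.Subsets {
			if len(ss.Addresses) > 0 {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Got endpoints: %s [%v]\n", created.Name, time.Since(start))
}

The percentile lines that follow are consistent with indexing the 200 sorted samples at N*p/100 (zero-based): the reported 99 %ile, 2.328530562s, is the 199th of the 200 sorted values.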
• [SLOW TEST:19.335 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":227,"skipped":3570,"failed":0} [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:16:10.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:16:17.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8452" for this suite. STEP: Destroying namespace "nsdeletetest-9465" for this suite. May 5 22:16:18.389: INFO: Namespace nsdeletetest-9465 was already deleted STEP: Destroying namespace "nsdeletetest-8321" for this suite. 
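The namespace check above reduces to: create a Service in a test namespace, delete the namespace, wait until it is fully removed, recreate it, and expect the Service list to come back empty. A rough client-go equivalent (illustrative names, not this run's generated namespaces; v0.18+ signatures):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.TODO()
	name := "nsdeletetest-demo" // illustrative namespace name

	// Create a namespace with a Service in it.
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "test-service"},
		Spec:       corev1.ServiceSpec{Ports: []corev1.ServicePort{{Port: 80}}},
	}
	if _, err := cs.CoreV1().Services(name).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Delete the namespace and wait until it is actually gone (finalizers run).
	if err := cs.CoreV1().Namespaces().Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	if err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, getErr := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
		return apierrors.IsNotFound(getErr), nil
	}); err != nil {
		panic(err)
	}

	// Recreate the namespace and verify no Service survived with it.
	if _, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	svcs, err := cs.CoreV1().Services(name).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services in recreated namespace: %d (want 0)\n", len(svcs.Items))
}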
• [SLOW TEST:8.028 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":228,"skipped":3570,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:16:18.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 5 22:16:18.568: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 22:16:18.615: INFO: Waiting for terminating namespaces to be deleted... May 5 22:16:18.620: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 5 22:16:18.635: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 22:16:18.635: INFO: Container kindnet-cni ready: true, restart count 0 May 5 22:16:18.635: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 22:16:18.635: INFO: Container kube-proxy ready: true, restart count 0 May 5 22:16:18.635: INFO: svc-latency-rc-8f4kj from svc-latency-8476 started at 2020-05-05 22:15:51 +0000 UTC (1 container status recorded) May 5 22:16:18.635: INFO: Container svc-latency-rc ready: true, restart count 0 May 5 22:16:18.635: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 5 22:16:18.694: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 22:16:18.694: INFO: Container kindnet-cni ready: true, restart count 0 May 5 22:16:18.694: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 5 22:16:18.694: INFO: Container kube-bench ready: false, restart count 0 May 5 22:16:18.694: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 22:16:18.694: INFO: Container kube-proxy ready: true, restart count 0 May 5 22:16:18.694: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 5 22:16:18.694: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes.
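The steps that follow apply a random label to the chosen node and relaunch the pod with a matching nodeSelector, which is the predicate actually under test. In Go API terms the relaunched pod's constraint amounts to the sketch below (pod name and image are illustrative; the label key and value are the ones this run applied to jerma-worker2):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// relaunchedPod sketches the scheduling constraint under test: the pod may
// only land on a node carrying the random label applied below.
func relaunchedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // any always-running image works
			}},
			// Label key/value taken from this run's log lines.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-1d7fe5d2-a29c-4900-a8eb-5d5831a193e3": "42",
			},
		},
	}
}

func main() { _ = relaunchedPod() }

The final steps then remove the label and confirm it is gone, leaving the node clean for later [Serial] tests.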
STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-1d7fe5d2-a29c-4900-a8eb-5d5831a193e3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-1d7fe5d2-a29c-4900-a8eb-5d5831a193e3 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-1d7fe5d2-a29c-4900-a8eb-5d5831a193e3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:16:27.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7448" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.778 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":229,"skipped":3575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:16:27.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5781.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5781.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5781.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5781.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5781.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5781.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5781.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5781.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 220.222.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.222.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.222.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.222.220_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5781.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5781.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5781.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5781.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5781.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5781.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5781.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5781.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5781.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5781.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 220.222.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.222.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.222.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.222.220_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 22:16:35.960: INFO: Unable to read wheezy_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:35.963: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:35.966: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:36.001: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:36.219: INFO: Unable to read jessie_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:36.222: INFO: Unable to read jessie_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:36.248: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:36.253: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:36.356: INFO: Lookups using dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee failed for: [wheezy_udp@dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_udp@dns-test-service.dns-5781.svc.cluster.local jessie_tcp@dns-test-service.dns-5781.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local] May 5 22:16:41.385: INFO: Unable to read wheezy_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:41.395: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods 
dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:41.456: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:41.606: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:41.787: INFO: Unable to read jessie_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:41.804: INFO: Unable to read jessie_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:41.834: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:41.839: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:41.968: INFO: Lookups using dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee failed for: [wheezy_udp@dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_udp@dns-test-service.dns-5781.svc.cluster.local jessie_tcp@dns-test-service.dns-5781.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local] May 5 22:16:46.361: INFO: Unable to read wheezy_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:46.364: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:46.367: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:46.370: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:46.389: INFO: Unable to read jessie_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could 
not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:46.391: INFO: Unable to read jessie_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:46.394: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:46.396: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:46.412: INFO: Lookups using dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee failed for: [wheezy_udp@dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_udp@dns-test-service.dns-5781.svc.cluster.local jessie_tcp@dns-test-service.dns-5781.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local] May 5 22:16:51.361: INFO: Unable to read wheezy_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:51.365: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:51.369: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:51.373: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:51.434: INFO: Unable to read jessie_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:51.437: INFO: Unable to read jessie_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:51.440: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:51.443: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod 
dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:51.460: INFO: Lookups using dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee failed for: [wheezy_udp@dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_udp@dns-test-service.dns-5781.svc.cluster.local jessie_tcp@dns-test-service.dns-5781.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local] May 5 22:16:56.403: INFO: Unable to read wheezy_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:56.407: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:56.411: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:56.415: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:56.438: INFO: Unable to read jessie_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:56.440: INFO: Unable to read jessie_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:56.443: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:56.445: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:16:56.462: INFO: Lookups using dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee failed for: [wheezy_udp@dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_udp@dns-test-service.dns-5781.svc.cluster.local jessie_tcp@dns-test-service.dns-5781.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local] May 5 22:17:01.360: INFO: 
Unable to read wheezy_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:17:01.369: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:17:01.372: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:17:01.374: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:17:01.389: INFO: Unable to read jessie_udp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:17:01.391: INFO: Unable to read jessie_tcp@dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:17:01.394: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:17:01.396: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local from pod dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee: the server could not find the requested resource (get pods dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee) May 5 22:17:02.243: INFO: Lookups using dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee failed for: [wheezy_udp@dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@dns-test-service.dns-5781.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_udp@dns-test-service.dns-5781.svc.cluster.local jessie_tcp@dns-test-service.dns-5781.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5781.svc.cluster.local] May 5 22:17:06.429: INFO: DNS probes using dns-5781/dns-test-09d9e320-1400-43e0-bd22-7a9fcd5e77ee succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:17:07.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5781" for this suite. 
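Each probe above is a dig query executed inside the wheezy and jessie pods, covering the standard Service DNS shapes: A records, SRV records for a named port, pod A records, and a PTR for the ClusterIP. The same records can be spot-checked from any in-cluster pod with Go's standard resolver; a small sketch, assuming it runs inside the cluster (the names and the 10.107.222.220 ClusterIP are the ones from this run):

package main

import (
	"fmt"
	"net"
)

func main() {
	// A record for the Service.
	addrs, err := net.LookupHost("dns-test-service.dns-5781.svc.cluster.local")
	fmt.Println("A:", addrs, err)

	// SRV record for the named port (_http._tcp.<service>.<namespace>.svc).
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-5781.svc.cluster.local")
	for _, s := range srvs {
		fmt.Printf("SRV: %s:%d\n", s.Target, s.Port)
	}
	fmt.Println("SRV err:", err)

	// PTR record for the ClusterIP (10.107.222.220 in this run).
	names, err := net.LookupAddr("10.107.222.220")
	fmt.Println("PTR:", names, err)
}

The early "Unable to read" failures above are the probe loop running before CoreDNS has the new records; the test only requires that every name eventually resolves, which it does by 22:17:06.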
• [SLOW TEST:40.132 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":230,"skipped":3604,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:17:07.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 5 22:17:07.397: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:17:23.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7305" for this suite. • [SLOW TEST:16.645 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":231,"skipped":3615,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:17:23.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
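The pod created in the next step carries a postStart exec hook. In the v1.17 API vintage of this run its shape is roughly the following sketch (container name, image, and commands are illustrative, not the test's own; corev1.Handler was renamed LifecycleHandler in later releases):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithPostStartExecHook sketches the pod shape under test: the kubelet
// runs the postStart command after the container starts, and the hook must
// complete successfully as part of bringing the container up.
func podWithPostStartExecHook() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "hooked", // illustrative
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.Handler{ // LifecycleHandler in newer APIs
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo poststart"},
						},
					},
				},
			}},
		},
	}
}

func main() { _ = podWithPostStartExecHook() }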
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 5 22:17:32.138: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 22:17:32.163: INFO: Pod pod-with-poststart-exec-hook still exists May 5 22:17:34.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 22:17:34.379: INFO: Pod pod-with-poststart-exec-hook still exists May 5 22:17:36.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 22:17:36.171: INFO: Pod pod-with-poststart-exec-hook still exists May 5 22:17:38.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 22:17:38.193: INFO: Pod pod-with-poststart-exec-hook still exists May 5 22:17:40.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 5 22:17:40.169: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:17:40.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8243" for this suite. • [SLOW TEST:16.206 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3624,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:17:40.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-7702/secret-test-9689514a-d0c1-4592-8010-8ad7a099f51f STEP: Creating a pod to test consume secrets May 5 22:17:41.112: INFO: Waiting up to 5m0s for pod "pod-configmaps-e246fd51-395b-4f24-9691-9cee39e7175b" in namespace "secrets-7702" to be "success or failure" May 5 22:17:41.194: INFO: Pod "pod-configmaps-e246fd51-395b-4f24-9691-9cee39e7175b": Phase="Pending", Reason="", readiness=false. Elapsed: 81.810268ms May 5 22:17:43.198: INFO: Pod "pod-configmaps-e246fd51-395b-4f24-9691-9cee39e7175b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.085986887s May 5 22:17:45.201: INFO: Pod "pod-configmaps-e246fd51-395b-4f24-9691-9cee39e7175b": Phase="Running", Reason="", readiness=true. Elapsed: 4.0892466s May 5 22:17:47.204: INFO: Pod "pod-configmaps-e246fd51-395b-4f24-9691-9cee39e7175b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092551158s STEP: Saw pod success May 5 22:17:47.204: INFO: Pod "pod-configmaps-e246fd51-395b-4f24-9691-9cee39e7175b" satisfied condition "success or failure" May 5 22:17:47.207: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-e246fd51-395b-4f24-9691-9cee39e7175b container env-test: STEP: delete the pod May 5 22:17:47.244: INFO: Waiting for pod pod-configmaps-e246fd51-395b-4f24-9691-9cee39e7175b to disappear May 5 22:17:47.246: INFO: Pod pod-configmaps-e246fd51-395b-4f24-9691-9cee39e7175b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:17:47.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7702" for this suite. • [SLOW TEST:7.076 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3635,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:17:47.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:18:09.458: INFO: Container started at 2020-05-05 22:17:49 +0000 UTC, pod became ready at 2020-05-05 22:18:07 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:18:09.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3136" for this suite. 
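What the probe test above asserts is that the kubelet honors initialDelaySeconds: the pod stays Running but not Ready until the first successful probe, and no probe fires before the delay elapses. A rough sketch of such a container spec with illustrative values, using the v1.17-era corev1.Probe (which embeds corev1.Handler; newer releases embed corev1.ProbeHandler instead):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Container that only becomes Ready once the probe's exec command succeeds;
	// the kubelet will not run the probe at all before InitialDelaySeconds.
	c := corev1.Container{
		Name:  "probe-test",
		Image: "busybox", // illustrative
		ReadinessProbe: &corev1.Probe{
			Handler: corev1.Handler{ // embedded handler in the v1.17-era API
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
			},
			InitialDelaySeconds: 15, // illustrative values
			PeriodSeconds:       5,
			FailureThreshold:    3,
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
```

The log above shows exactly this window: the container started at 22:17:49 but the pod only became Ready at 22:18:07.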
• [SLOW TEST:22.213 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3638,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:18:09.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:18:09.565: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:18:10.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8247" for this suite. 
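The status-subresource test drives GET/PUT/PATCH against the custom resource's /status endpoint, which exists only when the CRD opts in via spec.versions[].subresources.status. A sketch of such a CRD using the apiextensions.k8s.io/v1 Go types, with an illustrative group and kind (the suite generates random names):

```go
package main

import (
	"encoding/json"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// CRD that opts in to the /status subresource: writes to /status change
	// only .status, and ordinary updates leave .status untouched.
	crd := apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"}, // must be <plural>.<group>
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural:   "widgets",
				Singular: "widget",
				Kind:     "Widget",
				ListKind: "WidgetList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: boolPtr(true),
					},
				},
				// This stanza is what the test exercises via
				// .../widgets/<name>/status GET, PUT, and PATCH.
				Subresources: &apiextv1.CustomResourceSubresources{
					Status: &apiextv1.CustomResourceSubresourceStatus{},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```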
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":235,"skipped":3648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:18:10.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 22:18:10.740: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 22:18:12.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313890, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313890, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313890, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313890, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:18:14.798: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313890, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313890, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313890, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724313890, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 
22:18:17.789: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:18:17.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6588" for this suite. STEP: Destroying namespace "webhook-6588-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.832 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":236,"skipped":3758,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:18:18.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 5 22:18:22.703: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fbbf86a8-fe87-4214-8fe6-a35465a1a902" May 5 22:18:22.703: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fbbf86a8-fe87-4214-8fe6-a35465a1a902" in namespace "pods-7204" to be "terminated due to deadline exceeded" May 5 22:18:22.715: INFO: Pod "pod-update-activedeadlineseconds-fbbf86a8-fe87-4214-8fe6-a35465a1a902": Phase="Running", Reason="", readiness=true. Elapsed: 11.938503ms May 5 22:18:24.719: INFO: Pod "pod-update-activedeadlineseconds-fbbf86a8-fe87-4214-8fe6-a35465a1a902": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.016117791s May 5 22:18:24.719: INFO: Pod "pod-update-activedeadlineseconds-fbbf86a8-fe87-4214-8fe6-a35465a1a902" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:18:24.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7204" for this suite. • [SLOW TEST:6.707 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3803,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:18:24.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 5 22:18:24.878: INFO: Waiting up to 5m0s for pod "pod-c08aad69-6d06-4866-b69a-ce201a24687c" in namespace "emptydir-4617" to be "success or failure" May 5 22:18:24.888: INFO: Pod "pod-c08aad69-6d06-4866-b69a-ce201a24687c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.200829ms May 5 22:18:26.892: INFO: Pod "pod-c08aad69-6d06-4866-b69a-ce201a24687c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014668383s May 5 22:18:28.897: INFO: Pod "pod-c08aad69-6d06-4866-b69a-ce201a24687c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018895709s STEP: Saw pod success May 5 22:18:28.897: INFO: Pod "pod-c08aad69-6d06-4866-b69a-ce201a24687c" satisfied condition "success or failure" May 5 22:18:28.899: INFO: Trying to get logs from node jerma-worker2 pod pod-c08aad69-6d06-4866-b69a-ce201a24687c container test-container: STEP: delete the pod May 5 22:18:28.932: INFO: Waiting for pod pod-c08aad69-6d06-4866-b69a-ce201a24687c to disappear May 5 22:18:28.978: INFO: Pod pod-c08aad69-6d06-4866-b69a-ce201a24687c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:18:28.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4617" for this suite. 
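The emptyDir test above creates a pod that mounts a default-medium emptyDir while running as a non-root user and verifies a 0666-mode file inside it. Roughly, with an illustrative image, command, and UID standing in for the suite's mounttest container:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Pod mounting an emptyDir with the default medium (node disk, not tmpfs),
	// running as a non-root UID; the test then checks file mode 0666 in it.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-default"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: int64Ptr(1001), // non-root; illustrative UID
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "" is the default (node storage); "Memory" would be tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "touch /test/f && chmod 0666 /test/f && ls -l /test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```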
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:18:28.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 5 22:18:29.075: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 22:18:29.109: INFO: Waiting for terminating namespaces to be deleted... May 5 22:18:29.111: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 5 22:18:29.118: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 22:18:29.118: INFO: Container kindnet-cni ready: true, restart count 0 May 5 22:18:29.118: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 22:18:29.118: INFO: Container kube-proxy ready: true, restart count 0 May 5 22:18:29.118: INFO: pod-update-activedeadlineseconds-fbbf86a8-fe87-4214-8fe6-a35465a1a902 from pods-7204 started at 2020-05-05 22:18:18 +0000 UTC (1 container status recorded) May 5 22:18:29.118: INFO: Container nginx ready: false, restart count 0 May 5 22:18:29.119: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 5 22:18:29.126: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 5 22:18:29.126: INFO: Container kube-hunter ready: false, restart count 0 May 5 22:18:29.126: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 22:18:29.126: INFO: Container kindnet-cni ready: true, restart count 0 May 5 22:18:29.126: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 5 22:18:29.126: INFO: Container kube-bench ready: false, restart count 0 May 5 22:18:29.126: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 5 22:18:29.126: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node.
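The next steps create two pods that collide on hostPort 54322: pod4 binds hostIP 0.0.0.0 (the empty string in the spec), which claims the port on every host IP, so pod5's request for 127.0.0.1:54322 on the same node cannot be satisfied and pod5 must stay Pending. A sketch of the two pod specs, where the image and node-selector label are illustrative (the test pins to the node via the random e2e label it just applied):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod builds a pod pinned to one node that requests hostPort 54322
// on the given hostIP.
func hostPortPod(name, hostIP string) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/hostname": "jerma-worker2"}, // illustrative selector
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1", // illustrative
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54322,
					HostIP:        hostIP, // "" means 0.0.0.0, i.e. all host IPs
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	// pod4 takes 0.0.0.0:54322; pod5 then asks for 127.0.0.1:54322 on the
	// same node and the scheduler must leave it Pending.
	for _, p := range []corev1.Pod{hostPortPod("pod4", ""), hostPortPod("pod5", "127.0.0.1")} {
		out, _ := json.MarshalIndent(p, "", "  ")
		fmt.Println(string(out))
	}
}
```

The long runtime of this case (the 308-second SLOW TEST below) comes from waiting out the scheduling timeout to confirm pod5 is never placed.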
STEP: verifying the node has the label kubernetes.io/e2e-25f727b0-1faf-4547-b783-0f62a6a8b701 with value 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-25f727b0-1faf-4547-b783-0f62a6a8b701 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-25f727b0-1faf-4547-b783-0f62a6a8b701 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:23:37.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5308" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.490 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":239,"skipped":3839,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:23:37.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-lhh9q in namespace proxy-9771 I0505 22:23:37.663014 7 runners.go:189] Created replication controller with name: proxy-service-lhh9q, namespace: proxy-9771, replica count: 1 I0505 22:23:38.713545 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 22:23:39.713787 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0505 22:23:40.714043 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0505 22:23:41.714264 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0505 22:23:42.714548 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating,
0 unknown, 1 runningButNotReady I0505 22:23:43.714781 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0505 22:23:44.715012 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0505 22:23:45.715296 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0505 22:23:46.715525 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0505 22:23:47.715723 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0505 22:23:48.715932 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0505 22:23:49.716162 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0505 22:23:50.716399 7 runners.go:189] proxy-service-lhh9q Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 5 22:23:50.720: INFO: setup took 13.165187373s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 5 22:23:50.726: INFO: (0) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 6.365509ms) May 5 22:23:50.727: INFO: (0) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 6.615519ms) May 5 22:23:50.728: INFO: (0) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 7.428509ms) May 5 22:23:50.728: INFO: (0) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 7.508819ms) May 5 22:23:50.728: INFO: (0) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 7.883245ms) May 5 22:23:50.728: INFO: (0) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... 
(200; 8.054374ms) May 5 22:23:50.730: INFO: (0) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 9.343607ms) May 5 22:23:50.733: INFO: (0) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 12.619197ms) May 5 22:23:50.733: INFO: (0) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 13.20792ms) May 5 22:23:50.734: INFO: (0) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 13.416165ms) May 5 22:23:50.734: INFO: (0) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 14.015553ms) May 5 22:23:50.738: INFO: (0) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 17.710687ms) May 5 22:23:50.738: INFO: (0) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 17.783831ms) May 5 22:23:50.738: INFO: (0) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 18.175107ms) May 5 22:23:50.739: INFO: (0) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test (200; 4.177474ms) May 5 22:23:50.749: INFO: (1) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 4.208954ms) May 5 22:23:50.749: INFO: (1) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test<... (200; 7.058426ms) May 5 22:23:50.752: INFO: (1) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 7.116541ms) May 5 22:23:50.752: INFO: (1) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 7.172618ms) May 5 22:23:50.752: INFO: (1) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 7.272324ms) May 5 22:23:50.752: INFO: (1) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 7.234889ms) May 5 22:23:50.753: INFO: (1) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 7.983548ms) May 5 22:23:50.758: INFO: (2) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 4.886671ms) May 5 22:23:50.758: INFO: (2) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 5.01686ms) May 5 22:23:50.758: INFO: (2) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 5.028352ms) May 5 22:23:50.758: INFO: (2) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 5.117592ms) May 5 22:23:50.759: INFO: (2) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 5.694565ms) May 5 22:23:50.759: INFO: (2) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 5.813836ms) May 5 22:23:50.759: INFO: (2) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 5.990685ms) May 5 22:23:50.759: INFO: (2) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 5.980937ms) May 5 22:23:50.759: INFO: (2) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 6.023951ms) May 5 22:23:50.759: INFO: (2) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... 
(200; 6.34627ms) May 5 22:23:50.759: INFO: (2) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 6.427787ms) May 5 22:23:50.759: INFO: (2) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: ... (200; 6.446107ms) May 5 22:23:50.760: INFO: (2) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 6.450169ms) May 5 22:23:50.760: INFO: (2) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 6.760095ms) May 5 22:23:50.764: INFO: (3) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 3.829908ms) May 5 22:23:50.764: INFO: (3) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 4.42935ms) May 5 22:23:50.764: INFO: (3) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 4.45002ms) May 5 22:23:50.765: INFO: (3) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 4.728847ms) May 5 22:23:50.765: INFO: (3) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 4.79768ms) May 5 22:23:50.765: INFO: (3) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 4.847949ms) May 5 22:23:50.765: INFO: (3) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test<... (200; 5.364821ms) May 5 22:23:50.772: INFO: (4) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 5.456783ms) May 5 22:23:50.772: INFO: (4) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 5.418404ms) May 5 22:23:50.772: INFO: (4) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 5.404361ms) May 5 22:23:50.772: INFO: (4) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 5.426751ms) May 5 22:23:50.772: INFO: (4) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 5.41875ms) May 5 22:23:50.772: INFO: (4) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 5.431757ms) May 5 22:23:50.772: INFO: (4) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: ... 
(200; 5.513415ms) May 5 22:23:50.773: INFO: (4) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 6.068122ms) May 5 22:23:50.773: INFO: (4) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 6.193178ms) May 5 22:23:50.773: INFO: (4) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 6.133273ms) May 5 22:23:50.773: INFO: (4) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 6.187295ms) May 5 22:23:50.773: INFO: (4) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 6.141658ms) May 5 22:23:50.777: INFO: (5) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.69513ms) May 5 22:23:50.777: INFO: (5) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 4.064001ms) May 5 22:23:50.778: INFO: (5) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 4.428837ms) May 5 22:23:50.778: INFO: (5) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 4.355527ms) May 5 22:23:50.778: INFO: (5) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 4.681527ms) May 5 22:23:50.778: INFO: (5) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 4.676371ms) May 5 22:23:50.778: INFO: (5) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 4.623844ms) May 5 22:23:50.778: INFO: (5) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 4.65342ms) May 5 22:23:50.778: INFO: (5) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 4.684274ms) May 5 22:23:50.778: INFO: (5) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 4.803769ms) May 5 22:23:50.778: INFO: (5) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test<... (200; 3.665468ms) May 5 22:23:50.782: INFO: (6) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.671149ms) May 5 22:23:50.782: INFO: (6) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: ... 
(200; 3.718807ms) May 5 22:23:50.782: INFO: (6) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 3.708371ms) May 5 22:23:50.782: INFO: (6) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 3.744382ms) May 5 22:23:50.782: INFO: (6) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 3.787777ms) May 5 22:23:50.782: INFO: (6) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.914883ms) May 5 22:23:50.783: INFO: (6) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 4.048031ms) May 5 22:23:50.783: INFO: (6) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 4.271421ms) May 5 22:23:50.783: INFO: (6) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 4.466709ms) May 5 22:23:50.783: INFO: (6) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 4.586035ms) May 5 22:23:50.783: INFO: (6) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 4.672572ms) May 5 22:23:50.783: INFO: (6) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 4.862161ms) May 5 22:23:50.783: INFO: (6) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 4.894102ms) May 5 22:23:50.786: INFO: (7) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 2.79842ms) May 5 22:23:50.786: INFO: (7) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 2.885171ms) May 5 22:23:50.786: INFO: (7) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 2.955181ms) May 5 22:23:50.787: INFO: (7) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.15141ms) May 5 22:23:50.787: INFO: (7) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 3.716193ms) May 5 22:23:50.787: INFO: (7) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.755867ms) May 5 22:23:50.787: INFO: (7) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 3.833458ms) May 5 22:23:50.787: INFO: (7) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 3.81594ms) May 5 22:23:50.787: INFO: (7) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 3.872728ms) May 5 22:23:50.787: INFO: (7) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 3.865924ms) May 5 22:23:50.787: INFO: (7) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test (200; 4.241902ms) May 5 22:23:50.794: INFO: (8) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 4.31467ms) May 5 22:23:50.794: INFO: (8) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 4.467999ms) May 5 22:23:50.794: INFO: (8) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... 
(200; 4.515445ms) May 5 22:23:50.794: INFO: (8) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 4.505347ms) May 5 22:23:50.794: INFO: (8) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 4.80776ms) May 5 22:23:50.794: INFO: (8) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 4.882563ms) May 5 22:23:50.794: INFO: (8) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 4.784272ms) May 5 22:23:50.794: INFO: (8) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 4.76528ms) May 5 22:23:50.794: INFO: (8) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 4.81839ms) May 5 22:23:50.794: INFO: (8) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 5.167832ms) May 5 22:23:50.799: INFO: (9) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 4.521054ms) May 5 22:23:50.799: INFO: (9) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 4.901145ms) May 5 22:23:50.799: INFO: (9) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 4.936738ms) May 5 22:23:50.799: INFO: (9) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 5.026033ms) May 5 22:23:50.800: INFO: (9) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 5.047007ms) May 5 22:23:50.800: INFO: (9) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 5.127806ms) May 5 22:23:50.800: INFO: (9) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 5.151745ms) May 5 22:23:50.800: INFO: (9) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 5.090064ms) May 5 22:23:50.800: INFO: (9) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 5.128513ms) May 5 22:23:50.800: INFO: (9) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 5.1102ms) May 5 22:23:50.800: INFO: (9) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test<... 
(200; 3.681956ms) May 5 22:23:50.806: INFO: (10) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 4.018987ms) May 5 22:23:50.806: INFO: (10) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 3.936874ms) May 5 22:23:50.806: INFO: (10) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 3.998852ms) May 5 22:23:50.806: INFO: (10) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 4.165929ms) May 5 22:23:50.806: INFO: (10) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 4.140594ms) May 5 22:23:50.806: INFO: (10) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 4.161558ms) May 5 22:23:50.806: INFO: (10) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 4.210495ms) May 5 22:23:50.806: INFO: (10) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 4.592183ms) May 5 22:23:50.806: INFO: (10) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 4.640168ms) May 5 22:23:50.806: INFO: (10) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: ... (200; 4.83132ms) May 5 22:23:50.808: INFO: (11) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 1.957009ms) May 5 22:23:50.810: INFO: (11) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.740078ms) May 5 22:23:50.810: INFO: (11) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 3.690248ms) May 5 22:23:50.810: INFO: (11) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 3.944339ms) May 5 22:23:50.811: INFO: (11) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 4.436533ms) May 5 22:23:50.811: INFO: (11) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 4.570929ms) May 5 22:23:50.811: INFO: (11) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 4.549948ms) May 5 22:23:50.811: INFO: (11) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 4.667071ms) May 5 22:23:50.811: INFO: (11) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 4.834621ms) May 5 22:23:50.811: INFO: (11) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 4.835007ms) May 5 22:23:50.812: INFO: (11) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 5.292417ms) May 5 22:23:50.812: INFO: (11) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 5.379885ms) May 5 22:23:50.812: INFO: (11) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 5.54323ms) May 5 22:23:50.812: INFO: (11) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 5.522899ms) May 5 22:23:50.812: INFO: (11) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 5.491332ms) May 5 22:23:50.812: INFO: (11) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: ... 
(200; 3.149504ms) May 5 22:23:50.816: INFO: (12) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.538221ms) May 5 22:23:50.816: INFO: (12) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 3.627676ms) May 5 22:23:50.816: INFO: (12) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 3.551912ms) May 5 22:23:50.816: INFO: (12) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 3.58915ms) May 5 22:23:50.816: INFO: (12) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test (200; 3.596258ms) May 5 22:23:50.816: INFO: (12) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 3.651707ms) May 5 22:23:50.816: INFO: (12) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.681756ms) May 5 22:23:50.817: INFO: (12) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 4.371862ms) May 5 22:23:50.817: INFO: (12) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 4.444076ms) May 5 22:23:50.817: INFO: (12) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 4.360593ms) May 5 22:23:50.817: INFO: (12) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 4.960786ms) May 5 22:23:50.817: INFO: (12) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 4.982057ms) May 5 22:23:50.817: INFO: (12) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 4.946252ms) May 5 22:23:50.821: INFO: (13) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 3.52321ms) May 5 22:23:50.821: INFO: (13) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 3.576856ms) May 5 22:23:50.821: INFO: (13) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 3.609267ms) May 5 22:23:50.821: INFO: (13) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 3.564351ms) May 5 22:23:50.821: INFO: (13) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test (200; 3.270992ms) May 5 22:23:50.826: INFO: (14) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.277626ms) May 5 22:23:50.826: INFO: (14) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 3.317066ms) May 5 22:23:50.826: INFO: (14) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.394456ms) May 5 22:23:50.826: INFO: (14) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 3.306832ms) May 5 22:23:50.826: INFO: (14) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: ... 
(200; 3.411337ms) May 5 22:23:50.828: INFO: (14) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 5.199949ms) May 5 22:23:50.828: INFO: (14) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 5.280452ms) May 5 22:23:50.828: INFO: (14) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 5.551945ms) May 5 22:23:50.828: INFO: (14) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 5.71569ms) May 5 22:23:50.828: INFO: (14) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 5.753725ms) May 5 22:23:50.828: INFO: (14) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 5.742773ms) May 5 22:23:50.830: INFO: (15) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 2.111634ms) May 5 22:23:50.831: INFO: (15) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 2.442428ms) May 5 22:23:50.831: INFO: (15) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 2.492455ms) May 5 22:23:50.832: INFO: (15) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 4.019297ms) May 5 22:23:50.833: INFO: (15) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test (200; 4.241989ms) May 5 22:23:50.833: INFO: (15) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 4.292371ms) May 5 22:23:50.833: INFO: (15) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 4.271262ms) May 5 22:23:50.833: INFO: (15) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 4.325367ms) May 5 22:23:50.833: INFO: (15) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 4.435576ms) May 5 22:23:50.835: INFO: (15) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 5.90842ms) May 5 22:23:50.835: INFO: (15) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 5.8959ms) May 5 22:23:50.835: INFO: (15) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 6.03952ms) May 5 22:23:50.835: INFO: (15) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 5.997011ms) May 5 22:23:50.835: INFO: (15) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 6.106142ms) May 5 22:23:50.838: INFO: (16) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:1080/proxy/: test<... (200; 2.589603ms) May 5 22:23:50.838: INFO: (16) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... 
(200; 2.802345ms) May 5 22:23:50.840: INFO: (16) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 4.852239ms) May 5 22:23:50.840: INFO: (16) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 5.12925ms) May 5 22:23:50.840: INFO: (16) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 5.178363ms) May 5 22:23:50.840: INFO: (16) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 5.176553ms) May 5 22:23:50.840: INFO: (16) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 5.150541ms) May 5 22:23:50.840: INFO: (16) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test (200; 5.333143ms) May 5 22:23:50.841: INFO: (16) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 6.383869ms) May 5 22:23:50.841: INFO: (16) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 6.332858ms) May 5 22:23:50.841: INFO: (16) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 6.368978ms) May 5 22:23:50.841: INFO: (16) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 6.382708ms) May 5 22:23:50.841: INFO: (16) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 6.374422ms) May 5 22:23:50.844: INFO: (17) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 2.157407ms) May 5 22:23:50.845: INFO: (17) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.299593ms) May 5 22:23:50.845: INFO: (17) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 3.36091ms) May 5 22:23:50.845: INFO: (17) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 3.597835ms) May 5 22:23:50.845: INFO: (17) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 3.627107ms) May 5 22:23:50.845: INFO: (17) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 3.544281ms) May 5 22:23:50.845: INFO: (17) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 3.613802ms) May 5 22:23:50.845: INFO: (17) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 3.804329ms) May 5 22:23:50.845: INFO: (17) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test<... 
(200; 3.88359ms) May 5 22:23:50.846: INFO: (17) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 4.701111ms) May 5 22:23:50.847: INFO: (17) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 5.217411ms) May 5 22:23:50.847: INFO: (17) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 5.758354ms) May 5 22:23:50.847: INFO: (17) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 5.711196ms) May 5 22:23:50.847: INFO: (17) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 5.763292ms) May 5 22:23:50.847: INFO: (17) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 5.747269ms) May 5 22:23:50.850: INFO: (18) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 2.755637ms) May 5 22:23:50.861: INFO: (18) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 13.694989ms) May 5 22:23:50.861: INFO: (18) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 13.276661ms) May 5 22:23:50.861: INFO: (18) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 13.272899ms) May 5 22:23:50.862: INFO: (18) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:462/proxy/: tls qux (200; 14.170478ms) May 5 22:23:50.862: INFO: (18) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 14.147222ms) May 5 22:23:50.863: INFO: (18) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 14.95523ms) May 5 22:23:50.863: INFO: (18) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... (200; 15.522863ms) May 5 22:23:50.863: INFO: (18) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 15.438011ms) May 5 22:23:50.863: INFO: (18) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test<... (200; 15.537718ms) May 5 22:23:50.863: INFO: (18) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 15.259813ms) May 5 22:23:50.863: INFO: (18) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 15.323625ms) May 5 22:23:50.863: INFO: (18) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 15.720375ms) May 5 22:23:50.863: INFO: (18) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 15.608638ms) May 5 22:23:50.867: INFO: (19) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw/proxy/: test (200; 3.244489ms) May 5 22:23:50.867: INFO: (19) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 2.793396ms) May 5 22:23:50.867: INFO: (19) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:160/proxy/: foo (200; 2.997322ms) May 5 22:23:50.867: INFO: (19) /api/v1/namespaces/proxy-9771/pods/proxy-service-lhh9q-28zmw:162/proxy/: bar (200; 3.591648ms) May 5 22:23:50.867: INFO: (19) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:443/proxy/: test<... (200; 3.281925ms) May 5 22:23:50.867: INFO: (19) /api/v1/namespaces/proxy-9771/pods/http:proxy-service-lhh9q-28zmw:1080/proxy/: ... 
(200; 3.393622ms) May 5 22:23:50.868: INFO: (19) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname1/proxy/: tls baz (200; 3.998403ms) May 5 22:23:50.868: INFO: (19) /api/v1/namespaces/proxy-9771/pods/https:proxy-service-lhh9q-28zmw:460/proxy/: tls baz (200; 3.358381ms) May 5 22:23:50.869: INFO: (19) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname2/proxy/: bar (200; 3.971474ms) May 5 22:23:50.869: INFO: (19) /api/v1/namespaces/proxy-9771/services/http:proxy-service-lhh9q:portname1/proxy/: foo (200; 4.105757ms) May 5 22:23:50.869: INFO: (19) /api/v1/namespaces/proxy-9771/services/https:proxy-service-lhh9q:tlsportname2/proxy/: tls qux (200; 5.076507ms) May 5 22:23:50.869: INFO: (19) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname2/proxy/: bar (200; 4.624466ms) May 5 22:23:50.869: INFO: (19) /api/v1/namespaces/proxy-9771/services/proxy-service-lhh9q:portname1/proxy/: foo (200; 4.454986ms) STEP: deleting ReplicationController proxy-service-lhh9q in namespace proxy-9771, will wait for the garbage collector to delete the pods May 5 22:23:50.927: INFO: Deleting ReplicationController proxy-service-lhh9q took: 6.636572ms May 5 22:23:51.227: INFO: Terminating ReplicationController proxy-service-lhh9q pods took: 300.27574ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:23:59.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9771" for this suite. • [SLOW TEST:21.757 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":240,"skipped":3859,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:23:59.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-b568b390-e516-4be8-9d6d-bc783c46327c STEP: Creating a pod to test consume secrets May 5 22:23:59.314: INFO: Waiting up to 5m0s for pod "pod-secrets-7a924262-478d-40e4-8675-003227d708ff" in namespace "secrets-7531" to be "success or failure" May 5 22:23:59.321: INFO: Pod "pod-secrets-7a924262-478d-40e4-8675-003227d708ff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.437869ms May 5 22:24:01.325: INFO: Pod "pod-secrets-7a924262-478d-40e4-8675-003227d708ff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010614484s May 5 22:24:03.330: INFO: Pod "pod-secrets-7a924262-478d-40e4-8675-003227d708ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014979562s STEP: Saw pod success May 5 22:24:03.330: INFO: Pod "pod-secrets-7a924262-478d-40e4-8675-003227d708ff" satisfied condition "success or failure" May 5 22:24:03.333: INFO: Trying to get logs from node jerma-worker pod pod-secrets-7a924262-478d-40e4-8675-003227d708ff container secret-volume-test: STEP: delete the pod May 5 22:24:03.404: INFO: Waiting for pod pod-secrets-7a924262-478d-40e4-8675-003227d708ff to disappear May 5 22:24:03.471: INFO: Pod pod-secrets-7a924262-478d-40e4-8675-003227d708ff no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:24:03.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7531" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:24:03.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 5 22:24:03.599: INFO: Waiting up to 5m0s for pod "downward-api-3fb0f074-92ef-42dd-b611-53a6864da9a5" in namespace "downward-api-8121" to be "success or failure" May 5 22:24:03.602: INFO: Pod "downward-api-3fb0f074-92ef-42dd-b611-53a6864da9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.953274ms May 5 22:24:05.606: INFO: Pod "downward-api-3fb0f074-92ef-42dd-b611-53a6864da9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007109515s May 5 22:24:07.611: INFO: Pod "downward-api-3fb0f074-92ef-42dd-b611-53a6864da9a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011187249s STEP: Saw pod success May 5 22:24:07.611: INFO: Pod "downward-api-3fb0f074-92ef-42dd-b611-53a6864da9a5" satisfied condition "success or failure" May 5 22:24:07.614: INFO: Trying to get logs from node jerma-worker pod downward-api-3fb0f074-92ef-42dd-b611-53a6864da9a5 container dapi-container: STEP: delete the pod May 5 22:24:07.757: INFO: Waiting for pod downward-api-3fb0f074-92ef-42dd-b611-53a6864da9a5 to disappear May 5 22:24:07.806: INFO: Pod downward-api-3fb0f074-92ef-42dd-b611-53a6864da9a5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:24:07.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8121" for this suite. 
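For reference, the downward-api check above injects pod metadata into container environment variables through fieldRef; a minimal hand-rolled equivalent (illustrative names, not the pod the framework generates) would be:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo          # hypothetical name; the e2e framework generates its own
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid  # the pod UID exposed as an env var, as the test asserts
EOF

The test then reads the container log and checks that the printed UID matches the pod's metadata.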
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3907,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:24:07.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9910 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9910;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9910 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9910;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9910.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9910.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9910.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9910.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9910.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9910.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9910.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9910.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9910.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9910.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9910.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 244.44.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.44.244_udp@PTR;check="$$(dig +tcp +noall +answer +search 244.44.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.44.244_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9910 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9910;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9910 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9910;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9910.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9910.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9910.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9910.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9910.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9910.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9910.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9910.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9910.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9910.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9910.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9910.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 244.44.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.44.244_udp@PTR;check="$$(dig +tcp +noall +answer +search 244.44.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.44.244_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 22:24:14.157: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.160: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.163: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.169: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.172: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.175: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.178: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.202: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.205: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.208: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.211: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.214: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.217: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.220: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.223: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:14.243: INFO: Lookups using dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc] May 5 22:24:19.249: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.252: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.256: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.259: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.262: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.265: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.268: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.271: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.294: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.297: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.300: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.303: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.306: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.308: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.311: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.313: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:19.330: INFO: Lookups using dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc] May 5 22:24:24.248: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.251: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.255: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.258: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod 
dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.262: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.265: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.268: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.270: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.288: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.290: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.292: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.295: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.298: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.301: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.304: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.307: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:24.323: INFO: Lookups using dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc] May 5 22:24:29.248: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.252: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.256: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.259: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.262: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.265: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.268: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.271: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.295: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.297: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.300: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.302: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.304: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod 
dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.307: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.309: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.311: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:29.328: INFO: Lookups using dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc] May 5 22:24:34.248: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.252: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.256: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.259: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.262: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.267: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.270: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.272: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod 
dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.291: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.293: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.296: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.299: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.301: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.304: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.306: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.309: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:34.327: INFO: Lookups using dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc] May 5 22:24:39.331: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.366: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.370: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could 
not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.401: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.405: INFO: Unable to read wheezy_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.408: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.410: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.414: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.523: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.526: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.529: INFO: Unable to read jessie_udp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.532: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910 from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.535: INFO: Unable to read jessie_udp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.538: INFO: Unable to read jessie_tcp@dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.541: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.544: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc from pod dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf: the server could not find the requested resource (get pods dns-test-ee567caf-969e-404a-a909-cd8229ef14cf) May 5 22:24:39.561: INFO: Lookups using dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-9910 wheezy_tcp@dns-test-service.dns-9910 wheezy_udp@dns-test-service.dns-9910.svc wheezy_tcp@dns-test-service.dns-9910.svc wheezy_udp@_http._tcp.dns-test-service.dns-9910.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9910.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9910 jessie_tcp@dns-test-service.dns-9910 jessie_udp@dns-test-service.dns-9910.svc jessie_tcp@dns-test-service.dns-9910.svc jessie_udp@_http._tcp.dns-test-service.dns-9910.svc jessie_tcp@_http._tcp.dns-test-service.dns-9910.svc] May 5 22:24:44.335: INFO: DNS probes using dns-9910/dns-test-ee567caf-969e-404a-a909-cd8229ef14cf succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:24:44.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9910" for this suite. • [SLOW TEST:36.960 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":243,"skipped":3936,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:24:44.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3481, will wait for the garbage collector to delete the pods May 5 22:24:51.035: INFO: Deleting Job.batch foo took: 6.760512ms May 5 22:24:51.335: INFO: Terminating Job.batch foo pods took: 300.250268ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:25:29.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3481" for this suite. 
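The job deletion above waits for the garbage collector to remove the job's pods; a comparable manual sequence (hypothetical job spec, mirroring the test's flow) looks like:

# create a job whose pods stay Running, mirroring the active-pods == parallelism check
kubectl create job foo --image=busybox -- sleep 3600
# delete the job; the garbage collector then reaps its pods
kubectl delete job foo
# job pods carry the job-name label, so this should eventually return nothing
kubectl get pods -l job-name=foo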
• [SLOW TEST:44.773 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":244,"skipped":3945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:25:29.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:25:29.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7896" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":245,"skipped":3986,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:25:29.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:25:29.884: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:25:37.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4706" for this suite. 
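The websocket test above retrieves container logs through the API server's pod log subresource using a websocket upgrade; the same endpoint can be exercised over plain HTTP (pod name is a placeholder, left unfilled) with:

# expose the API server locally
kubectl proxy --port=8001 &
# the e2e test issues this GET as a websocket upgrade; plain HTTP returns the same log stream
curl "http://127.0.0.1:8001/api/v1/namespaces/pods-4706/pods/<pod-name>/log"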
• [SLOW TEST:8.280 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4012,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:25:37.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 22:25:38.118: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af8a4b3f-1877-47e0-aaf7-3bd346824813" in namespace "projected-1722" to be "success or failure" May 5 22:25:38.137: INFO: Pod "downwardapi-volume-af8a4b3f-1877-47e0-aaf7-3bd346824813": Phase="Pending", Reason="", readiness=false. Elapsed: 18.98508ms May 5 22:25:40.144: INFO: Pod "downwardapi-volume-af8a4b3f-1877-47e0-aaf7-3bd346824813": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02526681s May 5 22:25:42.174: INFO: Pod "downwardapi-volume-af8a4b3f-1877-47e0-aaf7-3bd346824813": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055685566s STEP: Saw pod success May 5 22:25:42.174: INFO: Pod "downwardapi-volume-af8a4b3f-1877-47e0-aaf7-3bd346824813" satisfied condition "success or failure" May 5 22:25:42.191: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-af8a4b3f-1877-47e0-aaf7-3bd346824813 container client-container: STEP: delete the pod May 5 22:25:42.330: INFO: Waiting for pod downwardapi-volume-af8a4b3f-1877-47e0-aaf7-3bd346824813 to disappear May 5 22:25:42.451: INFO: Pod downwardapi-volume-af8a4b3f-1877-47e0-aaf7-3bd346824813 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:25:42.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1722" for this suite. 
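The projected downwardAPI test above surfaces the container's memory limit as a file via resourceFieldRef; a minimal sketch (illustrative names and limit value) is:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-memlimit-demo    # hypothetical; not the generated test pod
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"             # assumed value; the test asserts the file content matches the limit (in bytes)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF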
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:25:42.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-48fe0109-430d-4289-93f6-5899a7b53dae STEP: Creating a pod to test consume secrets May 5 22:25:42.548: INFO: Waiting up to 5m0s for pod "pod-secrets-59835116-0c3c-4ac6-b935-2349187a02c2" in namespace "secrets-8969" to be "success or failure" May 5 22:25:42.577: INFO: Pod "pod-secrets-59835116-0c3c-4ac6-b935-2349187a02c2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.529246ms May 5 22:25:44.673: INFO: Pod "pod-secrets-59835116-0c3c-4ac6-b935-2349187a02c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124852609s May 5 22:25:46.678: INFO: Pod "pod-secrets-59835116-0c3c-4ac6-b935-2349187a02c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129350059s STEP: Saw pod success May 5 22:25:46.678: INFO: Pod "pod-secrets-59835116-0c3c-4ac6-b935-2349187a02c2" satisfied condition "success or failure" May 5 22:25:46.680: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-59835116-0c3c-4ac6-b935-2349187a02c2 container secret-volume-test: STEP: delete the pod May 5 22:25:46.746: INFO: Waiting for pod pod-secrets-59835116-0c3c-4ac6-b935-2349187a02c2 to disappear May 5 22:25:46.760: INFO: Pod pod-secrets-59835116-0c3c-4ac6-b935-2349187a02c2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:25:46.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8969" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4050,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:25:46.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 5 22:25:59.307: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 22:25:59.319: INFO: Pod pod-with-prestop-http-hook still exists May 5 22:26:01.319: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 22:26:01.323: INFO: Pod pod-with-prestop-http-hook still exists May 5 22:26:03.319: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 22:26:03.416: INFO: Pod pod-with-prestop-http-hook still exists May 5 22:26:05.319: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 5 22:26:05.323: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:26:05.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3836" for this suite. 
• [SLOW TEST:18.571 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4060,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:26:05.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 22:26:05.525: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f02d6ff-d761-4c93-999d-bed4d4640b9b" in namespace "projected-1079" to be "success or failure" May 5 22:26:05.535: INFO: Pod "downwardapi-volume-7f02d6ff-d761-4c93-999d-bed4d4640b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.504888ms May 5 22:26:07.539: INFO: Pod "downwardapi-volume-7f02d6ff-d761-4c93-999d-bed4d4640b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013517875s May 5 22:26:09.542: INFO: Pod "downwardapi-volume-7f02d6ff-d761-4c93-999d-bed4d4640b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016872825s May 5 22:26:11.546: INFO: Pod "downwardapi-volume-7f02d6ff-d761-4c93-999d-bed4d4640b9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020769488s STEP: Saw pod success May 5 22:26:11.546: INFO: Pod "downwardapi-volume-7f02d6ff-d761-4c93-999d-bed4d4640b9b" satisfied condition "success or failure" May 5 22:26:11.549: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7f02d6ff-d761-4c93-999d-bed4d4640b9b container client-container: STEP: delete the pod May 5 22:26:11.699: INFO: Waiting for pod downwardapi-volume-7f02d6ff-d761-4c93-999d-bed4d4640b9b to disappear May 5 22:26:11.702: INFO: Pod downwardapi-volume-7f02d6ff-d761-4c93-999d-bed4d4640b9b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:26:11.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1079" for this suite. 
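The DefaultMode test above sets a file mode on the downward API volume and verifies it on the mounted files; a minimal sketch (illustrative names) is:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400            # the mode under test; the file should list as r--------
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF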
• [SLOW TEST:6.381 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4072,"failed":0} [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:26:11.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 5 22:26:11.865: INFO: Waiting up to 5m0s for pod "client-containers-1735cfcc-861a-4744-af60-ed2e87c27aa3" in namespace "containers-8943" to be "success or failure" May 5 22:26:11.905: INFO: Pod "client-containers-1735cfcc-861a-4744-af60-ed2e87c27aa3": Phase="Pending", Reason="", readiness=false. Elapsed: 39.889794ms May 5 22:26:14.003: INFO: Pod "client-containers-1735cfcc-861a-4744-af60-ed2e87c27aa3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137952275s May 5 22:26:16.010: INFO: Pod "client-containers-1735cfcc-861a-4744-af60-ed2e87c27aa3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144475001s May 5 22:26:18.012: INFO: Pod "client-containers-1735cfcc-861a-4744-af60-ed2e87c27aa3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.147388097s STEP: Saw pod success May 5 22:26:18.013: INFO: Pod "client-containers-1735cfcc-861a-4744-af60-ed2e87c27aa3" satisfied condition "success or failure" May 5 22:26:18.015: INFO: Trying to get logs from node jerma-worker pod client-containers-1735cfcc-861a-4744-af60-ed2e87c27aa3 container test-container: STEP: delete the pod May 5 22:26:18.031: INFO: Waiting for pod client-containers-1735cfcc-861a-4744-af60-ed2e87c27aa3 to disappear May 5 22:26:18.036: INFO: Pod client-containers-1735cfcc-861a-4744-af60-ed2e87c27aa3 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:26:18.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8943" for this suite. 
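The containers test above overrides the image's default arguments (docker CMD) by setting args on the container without touching command; a hand-written equivalent (illustrative image and args) is:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo         # hypothetical; the test uses its own generated pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # assumed image whose default CMD is "sh"
    args: ["echo", "override", "arguments"]   # args alone replaces the image CMD
EOF

kubectl logs args-override-demo    # expect: override arguments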
• [SLOW TEST:6.324 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4072,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:26:18.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5424.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5424.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5424.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5424.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5424.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5424.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 5 22:26:24.292: INFO: DNS probes using dns-5424/dns-test-28719198-524c-42a8-8a35-dc40a6b6fdc2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:26:24.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5424" for this suite. 
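The wheezy/jessie probe loops above reduce to getent lookups against /etc/hosts. Those entries exist because the kubelet manages /etc/hosts for pods that set hostname and subdomain, writing both the short hostname and the service-scoped FQDN. A self-contained sketch, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: dns-hosts-demo              # illustrative name
spec:
  hostname: dns-querier-1
  subdomain: dns-test-service       # pairs with a headless Service of the same name for DNS
  restartPolicy: Never
  containers:
  - name: querier
    image: docker.io/library/busybox:1.29
    # $(NAMESPACE) is expanded by the kubelet from the env var declared below
    command: ["sh", "-c", "getent hosts dns-querier-1 && getent hosts dns-querier-1.dns-test-service.$(NAMESPACE).svc.cluster.local"]
    env:
    - name: NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace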
• [SLOW TEST:6.371 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":252,"skipped":4169,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:26:24.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 5 22:26:24.461: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 5 22:26:24.587: INFO: Waiting for terminating namespaces to be deleted... May 5 22:26:24.590: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 5 22:26:24.595: INFO: pod-qos-class-03637aff-2bf0-4219-8c06-ee5c5d6f28a6 from pods-7896 started at 2020-05-05 22:25:29 +0000 UTC (1 container statuses recorded) May 5 22:26:24.595: INFO: Container nginx ready: true, restart count 0 May 5 22:26:24.595: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 22:26:24.595: INFO: Container kindnet-cni ready: true, restart count 0 May 5 22:26:24.595: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 22:26:24.595: INFO: Container kube-proxy ready: true, restart count 0 May 5 22:26:24.595: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 5 22:26:24.600: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 22:26:24.600: INFO: Container kindnet-cni ready: true, restart count 0 May 5 22:26:24.600: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 5 22:26:24.600: INFO: Container kube-bench ready: false, restart count 0 May 5 22:26:24.600: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 5 22:26:24.600: INFO: Container kube-proxy ready: true, restart count 0 May 5 22:26:24.600: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 5 22:26:24.600: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 5 22:26:24.912: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 5 22:26:24.912: INFO: Pod 
kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 5 22:26:24.912: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 5 22:26:24.912: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 May 5 22:26:24.912: INFO: Pod pod-qos-class-03637aff-2bf0-4219-8c06-ee5c5d6f28a6 requesting resource cpu=100m on Node jerma-worker STEP: Starting Pods to consume most of the cluster CPU. May 5 22:26:24.912: INFO: Creating a pod which consumes cpu=11060m on Node jerma-worker May 5 22:26:24.917: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-c982c8a5-a604-481e-a742-0674a8b4b31c.160c4239e229d9ec], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9167/filler-pod-c982c8a5-a604-481e-a742-0674a8b4b31c to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-c982c8a5-a604-481e-a742-0674a8b4b31c.160c423a764736b5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c982c8a5-a604-481e-a742-0674a8b4b31c.160c423aee53f465], Reason = [Created], Message = [Created container filler-pod-c982c8a5-a604-481e-a742-0674a8b4b31c] STEP: Considering event: Type = [Normal], Name = [filler-pod-c982c8a5-a604-481e-a742-0674a8b4b31c.160c423afd0ace67], Reason = [Started], Message = [Started container filler-pod-c982c8a5-a604-481e-a742-0674a8b4b31c] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee6f4f6f-66ee-4f3c-acb8-d29282db4cd8.160c4239e337c1af], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9167/filler-pod-ee6f4f6f-66ee-4f3c-acb8-d29282db4cd8 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee6f4f6f-66ee-4f3c-acb8-d29282db4cd8.160c423a8929c91f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee6f4f6f-66ee-4f3c-acb8-d29282db4cd8.160c423af18ee186], Reason = [Created], Message = [Created container filler-pod-ee6f4f6f-66ee-4f3c-acb8-d29282db4cd8] STEP: Considering event: Type = [Normal], Name = [filler-pod-ee6f4f6f-66ee-4f3c-acb8-d29282db4cd8.160c423b00a163d1], Reason = [Started], Message = [Started container filler-pod-ee6f4f6f-66ee-4f3c-acb8-d29282db4cd8] STEP: Considering event: Type = [Warning], Name = [additional-pod.160c423b49c8a8a4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:26:32.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9167" for this suite. 
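The FailedScheduling event above is the point of the test: once the filler pods absorb nearly all allocatable CPU, any further request cannot fit on any node. A sketch of the kind of pod that triggers it (the request value is illustrative; the framework computes its filler sizes from node allocatable):

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod           # mirrors the event name above
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "600m"   # with allocatable CPU already consumed, this stays Pending:
                      # "0/3 nodes are available: ... 2 Insufficient cpu."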
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.625 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":253,"skipped":4175,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:26:32.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 5 22:26:32.155: INFO: >>> kubeConfig: /root/.kube/config May 5 22:26:34.101: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:26:44.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8725" for this suite. 
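"Same group and version but different kinds" means two CRDs that differ only in spec.names, each contributing its own definition to /openapi/v2. A minimal sketch, with an illustrative group:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.multikind.example.com     # illustrative group
spec:
  group: multikind.example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo, listKind: FooList}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.multikind.example.com
spec:
  group: multikind.example.com         # same group and version as Foo
  scope: Namespaced
  names: {plural: bars, singular: bar, kind: Bar, listKind: BarList}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}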
• [SLOW TEST:12.696 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":254,"skipped":4181,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:26:44.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 5 22:26:44.852: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 5 22:26:44.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9341' May 5 22:26:50.378: INFO: stderr: "" May 5 22:26:50.378: INFO: stdout: "service/agnhost-slave created\n" May 5 22:26:50.378: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 5 22:26:50.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9341' May 5 22:26:50.659: INFO: stderr: "" May 5 22:26:50.659: INFO: stdout: "service/agnhost-master created\n" May 5 22:26:50.659: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 5 22:26:50.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9341' May 5 22:26:50.936: INFO: stderr: "" May 5 22:26:50.936: INFO: stdout: "service/frontend created\n" May 5 22:26:50.936: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 5 22:26:50.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9341' May 5 22:26:51.193: INFO: stderr: "" May 5 22:26:51.193: INFO: stdout: "deployment.apps/frontend created\n" May 5 22:26:51.193: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 5 22:26:51.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9341' May 5 22:26:51.512: INFO: stderr: "" May 5 22:26:51.512: INFO: stdout: "deployment.apps/agnhost-master created\n" May 5 22:26:51.512: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 5 22:26:51.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9341' May 5 22:26:51.846: INFO: stderr: "" May 5 22:26:51.846: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 5 22:26:51.846: INFO: Waiting for all frontend pods to be Running. May 5 22:27:01.896: INFO: Waiting for frontend to serve content. May 5 22:27:01.908: INFO: Trying to add a new entry to the guestbook. May 5 22:27:01.917: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 5 22:27:01.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9341' May 5 22:27:02.158: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 22:27:02.158: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 5 22:27:02.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9341' May 5 22:27:02.331: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 5 22:27:02.331: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 5 22:27:02.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9341' May 5 22:27:02.444: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 22:27:02.444: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 5 22:27:02.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9341' May 5 22:27:02.546: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 22:27:02.546: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 5 22:27:02.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9341' May 5 22:27:02.642: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 22:27:02.642: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 5 22:27:02.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9341' May 5 22:27:02.757: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 5 22:27:02.758: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:27:02.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9341" for this suite. 
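The frontend Service manifest above ships with its type commented out, so it defaults to ClusterIP; on a cluster with a cloud load balancer, uncommenting it yields the variant below (a sketch of the same Service with nothing else changed):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels: {app: guestbook, tier: frontend}
spec:
  type: LoadBalancer        # the line left commented out in the test manifest
  ports:
  - port: 80
  selector: {app: guestbook, tier: frontend}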
• [SLOW TEST:18.027 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":255,"skipped":4190,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:27:02.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:27:02.830: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 5 22:27:07.838: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 5 22:27:07.838: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 5 22:27:09.842: INFO: Creating deployment "test-rollover-deployment" May 5 22:27:09.862: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 5 22:27:11.868: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 5 22:27:11.875: INFO: Ensure that both replica sets have 1 created replica May 5 22:27:11.880: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 5 22:27:11.886: INFO: Updating deployment test-rollover-deployment May 5 22:27:11.886: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 5 22:27:13.923: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 5 22:27:13.928: INFO: Make sure deployment "test-rollover-deployment" is complete May 5 22:27:14.161: INFO: all replica sets need to contain the pod-template-hash label May 5 22:27:14.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314432, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:27:16.168: INFO: all replica sets need to contain the pod-template-hash label May 5 22:27:16.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314435, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:27:18.172: INFO: all replica sets need to contain the pod-template-hash label May 5 22:27:18.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314435, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:27:20.168: INFO: all replica sets need to contain the pod-template-hash label May 5 22:27:20.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314435, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:27:22.169: INFO: all replica sets need to contain the pod-template-hash label May 5 22:27:22.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314435, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:27:24.169: INFO: all replica sets need to contain the pod-template-hash label May 5 22:27:24.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314435, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314429, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:27:26.169: INFO: May 5 22:27:26.169: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 5 22:27:26.207: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9440 /apis/apps/v1/namespaces/deployment-9440/deployments/test-rollover-deployment 13b4ff2c-e9f9-4dab-b2f2-8106677a0c1d 13697361 2 2020-05-05 22:27:09 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031c7488 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-05 22:27:09 +0000 UTC,LastTransitionTime:2020-05-05 22:27:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-05 22:27:25 +0000 UTC,LastTransitionTime:2020-05-05 22:27:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 5 22:27:26.211: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-9440 /apis/apps/v1/namespaces/deployment-9440/replicasets/test-rollover-deployment-574d6dfbff aa91633a-9f8f-44cb-b9af-03e6fb70cbc8 13697350 2 2020-05-05 22:27:11 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 13b4ff2c-e9f9-4dab-b2f2-8106677a0c1d 0xc0031c7a47 0xc0031c7a48}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031c7ab8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 5 22:27:26.211: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 5 22:27:26.211: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9440 /apis/apps/v1/namespaces/deployment-9440/replicasets/test-rollover-controller 8f554987-ffe9-4f8b-9678-e8036b0f7971 13697360 2 2020-05-05 22:27:02 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 13b4ff2c-e9f9-4dab-b2f2-8106677a0c1d 0xc0031c792f 0xc0031c7950}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0031c79d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 22:27:26.211: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9440 /apis/apps/v1/namespaces/deployment-9440/replicasets/test-rollover-deployment-f6c94f66c 21169f99-41eb-4181-9cb0-6b0e27429a05 13697291 2 2020-05-05 22:27:09 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 13b4ff2c-e9f9-4dab-b2f2-8106677a0c1d 0xc0031c7b20 0xc0031c7b21}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031c7b98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 22:27:26.215: INFO: Pod "test-rollover-deployment-574d6dfbff-rrl9f" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-rrl9f test-rollover-deployment-574d6dfbff- deployment-9440 /api/v1/namespaces/deployment-9440/pods/test-rollover-deployment-574d6dfbff-rrl9f a2196a2e-280a-4afe-bcbd-6b0d3ea0e04a 13697309 0 2020-05-05 22:27:11 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff aa91633a-9f8f-44cb-b9af-03e6fb70cbc8 0xc0047f02d7 0xc0047f02d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ckpt6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ckpt6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ckpt6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:27:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:27:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:27:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:27:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.71,StartTime:2020-05-05 22:27:12 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 22:27:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8cc3a1c9f430ccf0c493f305873468f6ac20dfabe6ec0cb463b3abc13556574f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:27:26.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9440" for this suite. • [SLOW TEST:23.456 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":256,"skipped":4199,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:27:26.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
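Stepping back to the Deployment rollover test that just passed: the repeated "is progressing" statuses follow from the spec fields visible in the dump — maxUnavailable 0, maxSurge 1, and minReadySeconds 10, which delays availability of the new ReplicaSet's pod for ten seconds after it turns ready. Reassembled as a manifest (a sketch built from the dumped fields, not the framework's literal object):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10        # new pods count as available only 10s after becoming ready
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never dip below the desired replica count
      maxSurge: 1            # allow one extra pod while rolling over
  selector:
    matchLabels: {name: rollover-pod}
  template:
    metadata:
      labels: {name: rollover-pod}
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8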
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 5 22:27:34.348: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 22:27:34.374: INFO: Pod pod-with-prestop-exec-hook still exists May 5 22:27:36.374: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 22:27:36.378: INFO: Pod pod-with-prestop-exec-hook still exists May 5 22:27:38.374: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 22:27:38.378: INFO: Pod pod-with-prestop-exec-hook still exists May 5 22:27:40.374: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 22:27:40.378: INFO: Pod pod-with-prestop-exec-hook still exists May 5 22:27:42.374: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 22:27:42.378: INFO: Pod pod-with-prestop-exec-hook still exists May 5 22:27:44.374: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 22:27:44.378: INFO: Pod pod-with-prestop-exec-hook still exists May 5 22:27:46.374: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 22:27:46.379: INFO: Pod pod-with-prestop-exec-hook still exists May 5 22:27:48.374: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 22:27:48.378: INFO: Pod pod-with-prestop-exec-hook still exists May 5 22:27:50.374: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 5 22:27:50.407: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:27:50.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6624" for this suite. 
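The prestop hook this test waits on is declared in the container's lifecycle stanza. A self-contained sketch — the real test pod instead curls the HTTPGet handler pod created in BeforeEach, so the handler command and names here are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook    # mirrors the pod name above
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container after the delete request, before SIGTERM is sent
          command: ["sh", "-c", "echo prestop > /tmp/prestop"]

The repeated "still exists" polls above are expected: the pod lingers through its termination grace period while the hook runs to completion.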
• [SLOW TEST:24.214 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4204,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:27:50.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 22:27:51.562: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 22:27:53.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314471, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314471, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314471, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314471, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 22:27:56.621: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 5 22:28:00.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3598 to-be-attached-pod -i -c=container1' May 5 22:28:00.939: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:28:00.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3598" for this suite. STEP: Destroying namespace "webhook-3598-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.620 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":258,"skipped":4208,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:28:01.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 22:28:01.799: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 22:28:03.810: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314481, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314481, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314481, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314481, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 22:28:06.854: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
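Looking back at the deny-attach test that just passed: "kubectl attach" reaches the API server as a CONNECT on the pods/attach subresource, which is what the registered webhook intercepts. A sketch of such a registration, assuming the service details logged above and an illustrative handler path:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod.example.com   # illustrative name
webhooks:
- name: deny-attaching-pod.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]        # attach/exec arrive as CONNECT calls
    resources: ["pods/attach"]
  failurePolicy: Fail
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      namespace: webhook-3598      # from the run above
      name: e2e-test-webhook
      path: /pods/attach           # illustrative handler path

The rc: 1 from kubectl attach above is the webhook's denial propagating back through the API server.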
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:28:07.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2897" for this suite. STEP: Destroying namespace "webhook-2897-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.103 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":259,"skipped":4219,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:28:07.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-425/configmap-test-ab0d3959-1723-4c56-b3f6-c347458eaf1c STEP: Creating a pod to test consume configMaps May 5 22:28:07.517: INFO: Waiting up to 5m0s for pod "pod-configmaps-ada0a82d-004b-47c0-8a48-aa04e070abbf" in namespace "configmap-425" to be "success or failure" May 5 22:28:07.530: INFO: Pod "pod-configmaps-ada0a82d-004b-47c0-8a48-aa04e070abbf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.408674ms May 5 22:28:09.645: INFO: Pod "pod-configmaps-ada0a82d-004b-47c0-8a48-aa04e070abbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127773731s May 5 22:28:11.649: INFO: Pod "pod-configmaps-ada0a82d-004b-47c0-8a48-aa04e070abbf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.13125652s STEP: Saw pod success May 5 22:28:11.649: INFO: Pod "pod-configmaps-ada0a82d-004b-47c0-8a48-aa04e070abbf" satisfied condition "success or failure" May 5 22:28:11.651: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ada0a82d-004b-47c0-8a48-aa04e070abbf container env-test: STEP: delete the pod May 5 22:28:11.790: INFO: Waiting for pod pod-configmaps-ada0a82d-004b-47c0-8a48-aa04e070abbf to disappear May 5 22:28:11.805: INFO: Pod pod-configmaps-ada0a82d-004b-47c0-8a48-aa04e070abbf no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:28:11.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-425" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:28:11.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 5 22:28:12.975: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 5 22:28:14.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314492, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314492, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314493, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314492, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 5 22:28:16.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314492, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314492, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314493, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724314492, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 5 22:28:20.076: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:28:20.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6316-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:28:21.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-37" for this suite. STEP: Destroying namespace "webhook-37-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.662 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":261,"skipped":4279,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:28:21.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:28:26.604: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7239" for this suite. • [SLOW TEST:5.147 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":262,"skipped":4294,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:28:26.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 5 22:28:26.693: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:28:26.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9751" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":263,"skipped":4297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:28:26.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0505 22:28:39.824765 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 5 22:28:39.824: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:28:39.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8558" for this suite. • [SLOW TEST:13.379 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":264,"skipped":4333,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:28:40.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:28:40.436: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 5 22:28:45.655: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 5 22:28:45.655: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 5 22:28:45.899: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9548 /apis/apps/v1/namespaces/deployment-9548/deployments/test-cleanup-deployment 2a9f7716-8e95-4332-a3aa-14b802c770b0 13698103 1 2020-05-05 22:28:45 +0000 UTC map[name:cleanup-pod] map[] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e30378 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 5 22:28:46.114: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-9548 /apis/apps/v1/namespaces/deployment-9548/replicasets/test-cleanup-deployment-55ffc6b7b6 66e20971-52a7-4eb8-8828-3185c2dcbc48 13698111 1 2020-05-05 22:28:45 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 2a9f7716-8e95-4332-a3aa-14b802c770b0 0xc002c97f17 0xc002c97f18}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c97f88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 5 22:28:46.114: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 5 22:28:46.114: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9548 /apis/apps/v1/namespaces/deployment-9548/replicasets/test-cleanup-controller c665e1e2-21df-408a-8782-ee1fb6b322bf 13698105 1 2020-05-05 22:28:40 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 2a9f7716-8e95-4332-a3aa-14b802c770b0 0xc002c97e47
0xc002c97e48}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c97ea8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 5 22:28:46.299: INFO: Pod "test-cleanup-controller-9wjws" is available: &Pod{ObjectMeta:{test-cleanup-controller-9wjws test-cleanup-controller- deployment-9548 /api/v1/namespaces/deployment-9548/pods/test-cleanup-controller-9wjws f3f425c5-2583-4884-90e7-fd6233781395 13698081 0 2020-05-05 22:28:40 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller c665e1e2-21df-408a-8782-ee1fb6b322bf 0xc002e306e7 0xc002e306e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zb8ls,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zb8ls,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zb8ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName
:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:28:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:28:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:28:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:28:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.81,StartTime:2020-05-05 22:28:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-05 22:28:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1cd599f697e059096881b2979f1020f9ff35e565f0e04d1c704e9e8ef5724736,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 5 22:28:46.300: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-sxc7j" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-sxc7j test-cleanup-deployment-55ffc6b7b6- deployment-9548 /api/v1/namespaces/deployment-9548/pods/test-cleanup-deployment-55ffc6b7b6-sxc7j 2a13e106-a188-4e7d-9d07-26381b8440a5 13698112 0 2020-05-05 22:28:45 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 66e20971-52a7-4eb8-8828-3185c2dcbc48 0xc002e30887 0xc002e30888}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zb8ls,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zb8ls,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zb8ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-05 22:28:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:28:46.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9548" for this suite. 
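The cleanup behavior verified above hinges on the Deployment's spec.revisionHistoryLimit, visible as RevisionHistoryLimit:*0 in the dump: superseded ReplicaSets are deleted as soon as a new one takes over. A minimal manifest opting into the same behavior might look like this (a hand-written sketch, not output of the suite; names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo            # hypothetical name
spec:
  replicas: 1
  revisionHistoryLimit: 0       # keep no old ReplicaSets after a rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8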
• [SLOW TEST:6.599 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":265,"skipped":4336,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:28:46.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 5 22:28:52.956: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:28:53.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7391" for this suite. 
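The terminationMessagePolicy exercised above, FallbackToLogsOnError, tells the kubelet to use the tail of the container log as the termination message when the container fails without writing to its termination-message file, which is why the suite could match the logged "DONE". A pod reproducing that setup might look like this (a sketch; image and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Log something, then exit non-zero without touching /dev/termination-log;
    # with FallbackToLogsOnError the log tail ("DONE") becomes the message.
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError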
• [SLOW TEST:6.501 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4345,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:28:53.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 5 22:28:53.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-120ef7b1-f03b-4caf-8b61-f0531d3023c8" in namespace "projected-5442" to be "success or failure" May 5 22:28:53.340: INFO: Pod "downwardapi-volume-120ef7b1-f03b-4caf-8b61-f0531d3023c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.800178ms May 5 22:28:55.344: INFO: Pod "downwardapi-volume-120ef7b1-f03b-4caf-8b61-f0531d3023c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00689261s May 5 22:28:57.412: INFO: Pod "downwardapi-volume-120ef7b1-f03b-4caf-8b61-f0531d3023c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074358503s May 5 22:28:59.415: INFO: Pod "downwardapi-volume-120ef7b1-f03b-4caf-8b61-f0531d3023c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077570479s STEP: Saw pod success May 5 22:28:59.415: INFO: Pod "downwardapi-volume-120ef7b1-f03b-4caf-8b61-f0531d3023c8" satisfied condition "success or failure" May 5 22:28:59.417: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-120ef7b1-f03b-4caf-8b61-f0531d3023c8 container client-container: STEP: delete the pod May 5 22:28:59.623: INFO: Waiting for pod downwardapi-volume-120ef7b1-f03b-4caf-8b61-f0531d3023c8 to disappear May 5 22:28:59.640: INFO: Pod downwardapi-volume-120ef7b1-f03b-4caf-8b61-f0531d3023c8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:28:59.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5442" for this suite. 
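The "podname only" case above mounts a projected downwardAPI volume whose single item exposes metadata.name as a file, which the client container then prints. Roughly (a sketch with illustrative names):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name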
• [SLOW TEST:6.377 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4354,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:28:59.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-76537b05-8777-4fb5-9110-4abf66ce903b STEP: Creating a pod to test consume secrets May 5 22:28:59.810: INFO: Waiting up to 5m0s for pod "pod-secrets-436c2d26-564f-487d-9d8c-e9201fbb6987" in namespace "secrets-3578" to be "success or failure" May 5 22:28:59.875: INFO: Pod "pod-secrets-436c2d26-564f-487d-9d8c-e9201fbb6987": Phase="Pending", Reason="", readiness=false. Elapsed: 64.368222ms May 5 22:29:01.878: INFO: Pod "pod-secrets-436c2d26-564f-487d-9d8c-e9201fbb6987": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068136888s May 5 22:29:03.882: INFO: Pod "pod-secrets-436c2d26-564f-487d-9d8c-e9201fbb6987": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071937523s STEP: Saw pod success May 5 22:29:03.882: INFO: Pod "pod-secrets-436c2d26-564f-487d-9d8c-e9201fbb6987" satisfied condition "success or failure" May 5 22:29:03.885: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-436c2d26-564f-487d-9d8c-e9201fbb6987 container secret-volume-test: STEP: delete the pod May 5 22:29:03.906: INFO: Waiting for pod pod-secrets-436c2d26-564f-487d-9d8c-e9201fbb6987 to disappear May 5 22:29:03.946: INFO: Pod pod-secrets-436c2d26-564f-487d-9d8c-e9201fbb6987 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:29:03.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3578" for this suite. 
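The defaultMode variant above sets the permission bits applied to every file projected from the secret (the framework prints modes in decimal, e.g. *420 is 0644). A sketch with a hypothetical secret name and an illustrative mode:

apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo    # hypothetical name
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret        # hypothetical
      defaultMode: 0400            # projected files become mode r--------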
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:29:03.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 5 22:29:04.163: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix232635748/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:29:04.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4388" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":269,"skipped":4428,"failed":0} SSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:29:04.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:29:04.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8443" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":270,"skipped":4431,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:29:04.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-0d247722-fbd6-47b5-a9e7-b0fedd74206a STEP: Creating a pod to test consume configMaps May 5 22:29:04.443: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-453a6854-b8a1-43f0-8506-11d9067c44af" in namespace "projected-5911" to be "success or failure" May 5 22:29:04.446: INFO: Pod "pod-projected-configmaps-453a6854-b8a1-43f0-8506-11d9067c44af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.329409ms May 5 22:29:06.483: INFO: Pod "pod-projected-configmaps-453a6854-b8a1-43f0-8506-11d9067c44af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039651754s May 5 22:29:08.486: INFO: Pod "pod-projected-configmaps-453a6854-b8a1-43f0-8506-11d9067c44af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043341508s STEP: Saw pod success May 5 22:29:08.486: INFO: Pod "pod-projected-configmaps-453a6854-b8a1-43f0-8506-11d9067c44af" satisfied condition "success or failure" May 5 22:29:08.495: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-453a6854-b8a1-43f0-8506-11d9067c44af container projected-configmap-volume-test: STEP: delete the pod May 5 22:29:08.571: INFO: Waiting for pod pod-projected-configmaps-453a6854-b8a1-43f0-8506-11d9067c44af to disappear May 5 22:29:08.609: INFO: Pod pod-projected-configmaps-453a6854-b8a1-43f0-8506-11d9067c44af no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:29:08.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5911" for this suite. 
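The projected configMap volume above uses the same projection mechanism as the downwardAPI case earlier, just with a configMap source (a sketch with illustrative names; the key data-1 is assumed):

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo     # hypothetical name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: my-configmap         # hypothetical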
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4435,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:29:08.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-a95036b6-d503-4e80-88e9-fe0368a875a0 in namespace container-probe-858 May 5 22:29:14.734: INFO: Started pod busybox-a95036b6-d503-4e80-88e9-fe0368a875a0 in namespace container-probe-858 STEP: checking the pod's current state and verifying that restartCount is present May 5 22:29:14.737: INFO: Initial restart count of pod busybox-a95036b6-d503-4e80-88e9-fe0368a875a0 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:33:15.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-858" for this suite. 
• [SLOW TEST:246.931 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4447,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:33:15.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 5 22:33:15.740: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c15c0cb0-b574-4fc4-99f8-da57f3fdc17f", Controller:(*bool)(0xc0046adc7a), BlockOwnerDeletion:(*bool)(0xc0046adc7b)}} May 5 22:33:15.754: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9753ccab-64c6-4c75-8560-81699e68e272", Controller:(*bool)(0xc00323f3e2), BlockOwnerDeletion:(*bool)(0xc00323f3e3)}} May 5 22:33:15.848: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"55f6a7ed-f86f-4114-8188-95f42aa32659", Controller:(*bool)(0xc0046ade2a), BlockOwnerDeletion:(*bool)(0xc0046ade2b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:33:20.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9296" for this suite. 
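The dependency circle above is built purely out of metadata.ownerReferences: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, and the garbage collector must still make progress despite the cycle. Each pod carries a reference shaped like this (a sketch; the uid is copied from the log above, but in practice it is assigned by the API server at creation):

apiVersion: v1
kind: Pod
metadata:
  name: pod2
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod1
    uid: 9753ccab-64c6-4c75-8560-81699e68e272   # server-assigned; taken from the log
    controller: true
    blockOwnerDeletion: true
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # illustrative image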
• [SLOW TEST:5.378 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":273,"skipped":4465,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:33:20.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 5 22:33:20.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-2497 -- logs-generator --log-lines-total 100 --run-duration 20s' May 5 22:33:21.101: INFO: stderr: "" May 5 22:33:21.101: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 5 22:33:21.101: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 5 22:33:21.101: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2497" to be "running and ready, or succeeded" May 5 22:33:21.122: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 20.203887ms May 5 22:33:23.126: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024171322s May 5 22:33:25.130: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.02813087s May 5 22:33:25.130: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 5 22:33:25.130: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings May 5 22:33:25.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2497' May 5 22:33:25.247: INFO: stderr: "" May 5 22:33:25.247: INFO: stdout: "I0505 22:33:23.568944 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/62f 343\nI0505 22:33:23.769102 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/v6h 223\nI0505 22:33:23.969320 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/8j7v 322\nI0505 22:33:24.169102 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/cqz 373\nI0505 22:33:24.369346 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/tvm6 310\nI0505 22:33:24.569133 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/dz8 569\nI0505 22:33:24.769345 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/cpm7 281\nI0505 22:33:24.969323 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/92x9 221\nI0505 22:33:25.169312 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/s5l 599\n" STEP: limiting log lines May 5 22:33:25.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2497 --tail=1' May 5 22:33:25.358: INFO: stderr: "" May 5 22:33:25.358: INFO: stdout: "I0505 22:33:25.169312 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/s5l 599\n" May 5 22:33:25.358: INFO: got output "I0505 22:33:25.169312 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/s5l 599\n" STEP: limiting log bytes May 5 22:33:25.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2497 --limit-bytes=1' May 5 22:33:25.514: INFO: stderr: "" May 5 22:33:25.514: INFO: stdout: "I" May 5 22:33:25.514: INFO: got output "I" STEP: exposing timestamps May 5 22:33:25.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2497 --tail=1 --timestamps' May 5 22:33:25.634: INFO: stderr: "" May 5 22:33:25.634: INFO: stdout: "2020-05-05T22:33:25.569415853Z I0505 22:33:25.569231 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/9kcj 552\n" May 5 22:33:25.634: INFO: got output "2020-05-05T22:33:25.569415853Z I0505 22:33:25.569231 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/9kcj 552\n" STEP: restricting to a time range May 5 22:33:28.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2497 --since=1s' May 5 22:33:28.240: INFO: stderr: "" May 5 22:33:28.240: INFO: stdout: "I0505 22:33:27.369332 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/7g4 292\nI0505 22:33:27.569365 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/5chq 358\nI0505 22:33:27.769434 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/lbhm 240\nI0505 22:33:27.969337 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/9n96 275\nI0505 22:33:28.169426 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/75j 316\n" May 5 22:33:28.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2497 --since=24h' May 5 22:33:28.352: INFO: stderr: "" May 5 22:33:28.352: INFO: stdout: "I0505 22:33:23.568944 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/62f 
343\nI0505 22:33:23.769102 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/v6h 223\nI0505 22:33:23.969320 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/8j7v 322\nI0505 22:33:24.169102 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/cqz 373\nI0505 22:33:24.369346 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/tvm6 310\nI0505 22:33:24.569133 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/dz8 569\nI0505 22:33:24.769345 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/cpm7 281\nI0505 22:33:24.969323 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/92x9 221\nI0505 22:33:25.169312 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/s5l 599\nI0505 22:33:25.369067 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/776b 293\nI0505 22:33:25.569231 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/9kcj 552\nI0505 22:33:25.769398 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/b2c 279\nI0505 22:33:25.969402 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/828q 322\nI0505 22:33:26.169409 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/4qt8 227\nI0505 22:33:26.369355 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/vjp 467\nI0505 22:33:26.569345 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/8fb 596\nI0505 22:33:26.769322 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/x2p 259\nI0505 22:33:26.969360 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/p6w 556\nI0505 22:33:27.169103 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/cslg 220\nI0505 22:33:27.369332 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/7g4 292\nI0505 22:33:27.569365 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/5chq 358\nI0505 22:33:27.769434 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/lbhm 240\nI0505 22:33:27.969337 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/9n96 275\nI0505 22:33:28.169426 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/75j 316\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 5 22:33:28.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2497' May 5 22:33:39.534: INFO: stderr: "" May 5 22:33:39.534: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:33:39.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2497" for this suite. 
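The `kubectl run ... --generator=run-pod/v1` invocation that opens this test creates a bare Pod, with everything after the `--` becoming the container's args. Written out as a manifest it is roughly (a sketch):

apiVersion: v1
kind: Pod
metadata:
  name: logs-generator
spec:
  containers:
  - name: logs-generator
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    # Everything after "--" on the kubectl run command line becomes args:
    args: ["logs-generator", "--log-lines-total", "100", "--run-duration", "20s"]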
• [SLOW TEST:18.633 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":274,"skipped":4499,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:33:39.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-13ba56ef-fcbe-4bbb-802d-5ebd335e35d5 STEP: Creating secret with name s-test-opt-upd-da6c155b-e462-497c-9a08-5caa2a08fdc3 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-13ba56ef-fcbe-4bbb-802d-5ebd335e35d5 STEP: Updating secret s-test-opt-upd-da6c155b-e462-497c-9a08-5caa2a08fdc3 STEP: Creating secret with name s-test-opt-create-933d65df-9998-4d91-bb5f-2631fcb18e66 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:33:51.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6958" for this suite. 
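The "optional" in the test above refers to secret volume sources marked optional: true, which let the pod start, and the projected files appear or update in place, even while the referenced secret does not exist. One such volume might be declared like this (a sketch with illustrative names):

apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo    # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "sleep 600"]
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/secret-volume
  volumes:
  - name: opt-secret
    secret:
      secretName: created-later   # hypothetical; may not exist at pod start
      optional: true              # mount succeeds even while the secret is absent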
• [SLOW TEST:12.436 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4506,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:33:51.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 5 22:33:52.131: INFO: Waiting up to 5m0s for pod "client-containers-1e735ff1-12c1-4bbb-a6ba-6b41e31ce7e3" in namespace "containers-2802" to be "success or failure" May 5 22:33:52.134: INFO: Pod "client-containers-1e735ff1-12c1-4bbb-a6ba-6b41e31ce7e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.359634ms May 5 22:33:54.153: INFO: Pod "client-containers-1e735ff1-12c1-4bbb-a6ba-6b41e31ce7e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021966657s May 5 22:33:56.213: INFO: Pod "client-containers-1e735ff1-12c1-4bbb-a6ba-6b41e31ce7e3": Phase="Running", Reason="", readiness=true. Elapsed: 4.082610225s May 5 22:33:58.218: INFO: Pod "client-containers-1e735ff1-12c1-4bbb-a6ba-6b41e31ce7e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086669423s STEP: Saw pod success May 5 22:33:58.218: INFO: Pod "client-containers-1e735ff1-12c1-4bbb-a6ba-6b41e31ce7e3" satisfied condition "success or failure" May 5 22:33:58.221: INFO: Trying to get logs from node jerma-worker pod client-containers-1e735ff1-12c1-4bbb-a6ba-6b41e31ce7e3 container test-container: STEP: delete the pod May 5 22:33:58.347: INFO: Waiting for pod client-containers-1e735ff1-12c1-4bbb-a6ba-6b41e31ce7e3 to disappear May 5 22:33:58.350: INFO: Pod client-containers-1e735ff1-12c1-4bbb-a6ba-6b41e31ce7e3 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:33:58.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2802" for this suite. 
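Overriding the image's default command, as exercised above, means setting the container's command field, which replaces the Docker ENTRYPOINT (args would replace CMD). A minimal pod demonstrating the override (a sketch; image and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Replaces whatever ENTRYPOINT the image declares.
    command: ["echo", "running the overridden entrypoint"]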
• [SLOW TEST:6.360 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4528,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:33:58.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-6c790792-c3c9-4e86-9159-bac3b6f17048 in namespace container-probe-8359 May 5 22:34:02.552: INFO: Started pod test-webserver-6c790792-c3c9-4e86-9159-bac3b6f17048 in namespace container-probe-8359 STEP: checking the pod's current state and verifying that restartCount is present May 5 22:34:02.554: INFO: Initial restart count of pod test-webserver-6c790792-c3c9-4e86-9159-bac3b6f17048 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:38:03.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8359" for this suite. 
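This variant swaps the exec probe for an httpGet probe against a webserver; as long as the handler keeps answering with a 2xx/3xx status, restartCount stays 0 for the whole observation window. A sketch (image, path, and port are illustrative, not the suite's exact pod):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo   # hypothetical name
spec:
  containers:
  - name: webserver
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 5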
• [SLOW TEST:245.146 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4532,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 5 22:38:03.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 5 22:38:08.796: INFO: Successfully updated pod "labelsupdated9f7b1d0-c415-427b-ba0d-6968d72ab93e" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 5 22:38:10.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7530" for this suite. • [SLOW TEST:7.318 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4552,"failed":0} SSSSSSSSSSSS May 5 22:38:10.823: INFO: Running AfterSuite actions on all nodes May 5 22:38:10.823: INFO: Running AfterSuite actions on node 1 May 5 22:38:10.823: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 5438.576 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS