I0205 21:08:43.835994 9 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0205 21:08:43.837364 9 e2e.go:109] Starting e2e run "f2161c11-d3e7-47a3-aafe-b2fe567d349b" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580936922 - Will randomize all specs
Will run 278 of 4814 specs

Feb 5 21:08:43.905: INFO: >>> kubeConfig: /root/.kube/config
Feb 5 21:08:43.911: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 5 21:08:43.957: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 5 21:08:44.032: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 5 21:08:44.033: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 5 21:08:44.033: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 5 21:08:44.058: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 5 21:08:44.058: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 5 21:08:44.058: INFO: e2e test version: v1.17.0
Feb 5 21:08:44.063: INFO: kube-apiserver version: v1.17.0
Feb 5 21:08:44.063: INFO: >>> kubeConfig: /root/.kube/config
Feb 5 21:08:44.075: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:08:44.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
Feb 5 21:08:44.155: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 5 21:08:44.707: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 5 21:08:46.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 5 21:08:48.739: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 5 21:08:50.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533724, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 5 21:08:53.830: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Feb 5 21:08:59.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-9723 to-be-attached-pod -i -c=container1'
Feb 5 21:09:01.755: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:09:01.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9723" for this suite.
STEP: Destroying namespace "webhook-9723-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.908 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":1,"skipped":19,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:09:01.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-07c5706f-7532-4f50-9bba-b46c8b6e7f90
STEP: Creating a pod to test consume secrets
Feb 5 21:09:02.289: INFO: Waiting up to 5m0s for pod "pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b" in namespace "secrets-3495" to be "success or failure"
Feb 5 21:09:02.306: INFO: Pod "pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.420385ms
Feb 5 21:09:04.314: INFO: Pod "pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025237791s
Feb 5 21:09:06.322: INFO: Pod "pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03353694s
Feb 5 21:09:08.330: INFO: Pod "pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041377917s
Feb 5 21:09:10.334: INFO: Pod "pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045403745s
Feb 5 21:09:12.346: INFO: Pod "pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056745876s
STEP: Saw pod success
Feb 5 21:09:12.346: INFO: Pod "pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b" satisfied condition "success or failure"
Feb 5 21:09:12.352: INFO: Trying to get logs from node jerma-node pod pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b container secret-volume-test:
STEP: delete the pod
Feb 5 21:09:12.413: INFO: Waiting for pod pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b to disappear
Feb 5 21:09:12.419: INFO: Pod pod-secrets-fc150b2a-8c6f-4827-943b-3e933e75c90b no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:09:12.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3495" for this suite.
STEP: Destroying namespace "secret-namespace-8536" for this suite.
• [SLOW TEST:10.526 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":35,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:09:12.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 5 21:09:12.702: INFO: Waiting up to 5m0s for pod "pod-21b09c98-a6a5-4478-94f3-c0398af5fae5" in namespace "emptydir-5302" to be "success or failure"
Feb 5 21:09:12.714: INFO: Pod "pod-21b09c98-a6a5-4478-94f3-c0398af5fae5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.814963ms
Feb 5 21:09:14.723: INFO: Pod "pod-21b09c98-a6a5-4478-94f3-c0398af5fae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021173054s
Feb 5 21:09:16.731: INFO: Pod "pod-21b09c98-a6a5-4478-94f3-c0398af5fae5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029184327s
Feb 5 21:09:18.738: INFO: Pod "pod-21b09c98-a6a5-4478-94f3-c0398af5fae5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036033075s
Feb 5 21:09:20.836: INFO: Pod "pod-21b09c98-a6a5-4478-94f3-c0398af5fae5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.134655495s
STEP: Saw pod success
Feb 5 21:09:20.837: INFO: Pod "pod-21b09c98-a6a5-4478-94f3-c0398af5fae5" satisfied condition "success or failure"
Feb 5 21:09:20.842: INFO: Trying to get logs from node jerma-node pod pod-21b09c98-a6a5-4478-94f3-c0398af5fae5 container test-container:
STEP: delete the pod
Feb 5 21:09:20.980: INFO: Waiting for pod pod-21b09c98-a6a5-4478-94f3-c0398af5fae5 to disappear
Feb 5 21:09:20.994: INFO: Pod pod-21b09c98-a6a5-4478-94f3-c0398af5fae5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:09:20.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5302" for this suite.
• [SLOW TEST:8.493 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":48,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:09:21.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb 5 21:09:30.289: INFO: Successfully updated pod "labelsupdatee4f912ab-15e2-4c11-8faa-c431aa81d028"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:09:32.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1083" for this suite.
• [SLOW TEST:11.345 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":74,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:09:32.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating cluster-info
Feb 5 21:09:32.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 5 21:09:32.598: INFO: stderr: ""
Feb 5 21:09:32.598: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:09:32.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-252" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":5,"skipped":77,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:09:32.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 5 21:09:32.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1399
I0205 21:09:32.807952 9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1399, replica count: 1
I0205 21:09:33.859311 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0205 21:09:34.859805 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0205 21:09:35.860755 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0205 21:09:36.861414 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0205 21:09:37.861893 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0205 21:09:38.862754 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0205 21:09:39.863333 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0205 21:09:40.863939 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 5 21:09:41.055: INFO: Created: latency-svc-9fh87
Feb 5 21:09:41.060: INFO: Got endpoints: latency-svc-9fh87 [96.288842ms]
Feb 5 21:09:41.107: INFO: Created: latency-svc-9s5g4
Feb 5 21:09:41.109: INFO: Got endpoints: latency-svc-9s5g4 [46.926779ms]
Feb 5 21:09:41.135: INFO: Created: latency-svc-fg8wh
Feb 5 21:09:41.139: INFO: Got endpoints: latency-svc-fg8wh [76.604748ms]
Feb 5 21:09:41.237: INFO: Created: latency-svc-gtlpr
Feb 5 21:09:41.278: INFO: Got endpoints: latency-svc-gtlpr [215.819402ms]
Feb 5 21:09:41.283: INFO: Created: latency-svc-w9kfv
Feb 5 21:09:41.466: INFO: Got endpoints: latency-svc-w9kfv [404.628726ms]
Feb 5 21:09:41.485: INFO: Created: latency-svc-mwgpr
Feb 5 21:09:41.503: INFO: Got endpoints: latency-svc-mwgpr [441.690242ms]
Feb 5 21:09:41.558: INFO: Created: latency-svc-qkdfp
Feb 5 21:09:41.619: INFO: Got endpoints: latency-svc-qkdfp [556.228797ms]
Feb 5 21:09:41.651: INFO: Created: latency-svc-lxx2r
Feb 5 21:09:41.656: INFO: Got endpoints: latency-svc-lxx2r [594.170599ms]
Feb 5 21:09:41.715: INFO: Created: latency-svc-f7j5d
Feb 5 21:09:41.829: INFO: Got endpoints: latency-svc-f7j5d [766.885522ms]
Feb 5 21:09:41.874: INFO: Created: latency-svc-nnb5g
Feb 5 21:09:41.879: INFO: Got endpoints: latency-svc-nnb5g [817.157506ms]
Feb 5 21:09:41.976: INFO: Created: latency-svc-vnsh5
Feb 5 21:09:42.018: INFO: Created: latency-svc-hf59c
Feb 5 21:09:42.027: INFO: Got endpoints: latency-svc-vnsh5 [964.239782ms]
Feb 5 21:09:42.074: INFO: Got endpoints: latency-svc-hf59c [1.011764858s]
Feb 5 21:09:42.080: INFO: Created: latency-svc-4mvbl
Feb 5 21:09:42.169: INFO: Got endpoints: latency-svc-4mvbl [1.107411868s]
Feb 5 21:09:42.225: INFO: Created: latency-svc-mm6d6
Feb 5 21:09:42.263: INFO: Got endpoints: latency-svc-mm6d6 [1.201406655s]
Feb 5 21:09:42.310: INFO: Created: latency-svc-csmfn
Feb 5 21:09:42.317: INFO: Got endpoints: latency-svc-csmfn [1.256019807s]
Feb 5 21:09:42.338: INFO: Created: latency-svc-kd97v
Feb 5 21:09:42.342: INFO: Got endpoints: latency-svc-kd97v [1.281336225s]
Feb 5 21:09:42.376: INFO: Created: latency-svc-tr7lc
Feb 5 21:09:42.377: INFO: Got endpoints: latency-svc-tr7lc [1.267776616s]
Feb 5 21:09:42.392: INFO: Created: latency-svc-cf5tt
Feb 5 21:09:42.414: INFO: Got endpoints: latency-svc-cf5tt [1.274924205s]
Feb 5 21:09:42.500: INFO: Created: latency-svc-pzmd8
Feb 5 21:09:42.545: INFO: Got endpoints: latency-svc-pzmd8 [1.267251587s]
Feb 5 21:09:42.663: INFO: Created: latency-svc-h2vqz
Feb 5 21:09:42.712: INFO: Got endpoints: latency-svc-h2vqz [1.245739419s]
Feb 5 21:09:42.724: INFO: Created: latency-svc-ncnfh
Feb 5 21:09:42.749: INFO: Got endpoints: latency-svc-ncnfh [1.245980526s]
Feb 5 21:09:42.840: INFO: Created: latency-svc-6lskz
Feb 5 21:09:42.859: INFO: Got endpoints: latency-svc-6lskz [1.240409658s]
Feb 5 21:09:42.861: INFO: Created: latency-svc-vb7b8
Feb 5 21:09:42.906: INFO: Got endpoints: latency-svc-vb7b8 [1.250154074s]
Feb 5 21:09:42.908: INFO: Created: latency-svc-vxk4d
Feb 5 21:09:42.911: INFO: Got endpoints: latency-svc-vxk4d [1.081774183s]
Feb 5 21:09:44.043: INFO: Created: latency-svc-26d4s
Feb 5 21:09:44.096: INFO: Got endpoints: latency-svc-26d4s [2.216781544s]
Feb 5 21:09:44.105: INFO: Created: latency-svc-zcsq5
Feb 5 21:09:44.127: INFO: Got endpoints: latency-svc-zcsq5 [2.100575986s]
Feb 5 21:09:44.217: INFO: Created: latency-svc-54njr
Feb 5 21:09:44.229: INFO: Got endpoints: latency-svc-54njr [2.154188421s]
Feb 5 21:09:44.414: INFO: Created: latency-svc-hjstm
Feb 5 21:09:44.474: INFO: Got endpoints: latency-svc-hjstm [2.304932656s]
Feb 5 21:09:44.531: INFO: Created: latency-svc-xp5dq
Feb 5 21:09:44.541: INFO: Got endpoints: latency-svc-xp5dq [2.277917924s]
Feb 5 21:09:44.568: INFO: Created: latency-svc-pf726
Feb 5 21:09:44.575: INFO: Got endpoints: latency-svc-pf726 [2.25757581s]
Feb 5 21:09:44.602: INFO: Created: latency-svc-gkgxt
Feb 5 21:09:44.618: INFO: Got endpoints: latency-svc-gkgxt [2.276019185s]
Feb 5 21:09:44.718: INFO: Created: latency-svc-nfpfg
Feb 5 21:09:44.737: INFO: Got endpoints: latency-svc-nfpfg [2.35996754s]
Feb 5 21:09:44.759: INFO: Created: latency-svc-c8j28
Feb 5 21:09:44.821: INFO: Created: latency-svc-9wslh
Feb 5 21:09:44.867: INFO: Got endpoints: latency-svc-c8j28 [2.453293347s]
Feb 5 21:09:44.877: INFO: Got endpoints: latency-svc-9wslh [2.331418511s]
Feb 5 21:09:44.893: INFO: Created: latency-svc-cmhkb
Feb 5 21:09:44.925: INFO: Got endpoints: latency-svc-cmhkb [2.212495733s]
Feb 5 21:09:44.949: INFO: Created: latency-svc-khwvf
Feb 5 21:09:44.955: INFO: Got endpoints: latency-svc-khwvf [2.204946531s]
Feb 5 21:09:45.082: INFO: Created: latency-svc-bgcxn
Feb 5 21:09:45.111: INFO: Got endpoints: latency-svc-bgcxn [2.251615451s]
Feb 5 21:09:45.114: INFO: Created: latency-svc-7vglk
Feb 5 21:09:45.120: INFO: Got endpoints: latency-svc-7vglk [2.213649927s]
Feb 5 21:09:45.149: INFO: Created: latency-svc-w42ds
Feb 5 21:09:45.157: INFO: Got endpoints: latency-svc-w42ds [2.245186086s]
Feb 5 21:09:45.185: INFO: Created: latency-svc-gt9gw
Feb 5 21:09:45.250: INFO: Got endpoints: latency-svc-gt9gw [1.15308961s]
Feb 5 21:09:45.282: INFO: Created: latency-svc-4cw6n
Feb 5 21:09:45.312: INFO: Got endpoints: latency-svc-4cw6n [155.603331ms]
Feb 5 21:09:45.331: INFO: Created: latency-svc-mv9cz
Feb 5 21:09:45.335: INFO: Got endpoints: latency-svc-mv9cz [1.207490019s]
Feb 5 21:09:45.420: INFO: Created: latency-svc-h8q76
Feb 5 21:09:45.423: INFO: Got endpoints: latency-svc-h8q76 [1.194684031s]
Feb 5 21:09:45.465: INFO: Created: latency-svc-6gl2g
Feb 5 21:09:45.466: INFO: Got endpoints: latency-svc-6gl2g [991.166896ms]
Feb 5 21:09:45.491: INFO: Created: latency-svc-s5m8m
Feb 5 21:09:45.492: INFO: Got endpoints: latency-svc-s5m8m [950.613209ms]
Feb 5 21:09:45.571: INFO: Created: latency-svc-8sb4p
Feb 5 21:09:45.571: INFO: Got endpoints: latency-svc-8sb4p [995.464235ms]
Feb 5 21:09:45.621: INFO: Created: latency-svc-jrw44
Feb 5 21:09:45.630: INFO: Got endpoints: latency-svc-jrw44 [1.012088784s]
Feb 5 21:09:45.789: INFO: Created: latency-svc-qftbl
Feb 5 21:09:45.795: INFO: Got endpoints: latency-svc-qftbl [1.058438225s]
Feb 5 21:09:45.855: INFO: Created: latency-svc-b2btq
Feb 5 21:09:45.864: INFO: Got endpoints: latency-svc-b2btq [996.336679ms]
Feb 5 21:09:45.946: INFO: Created: latency-svc-g4vw9
Feb 5 21:09:45.972: INFO: Got endpoints: latency-svc-g4vw9 [1.093651816s]
Feb 5 21:09:45.974: INFO: Created: latency-svc-f8ddv
Feb 5 21:09:45.986: INFO: Got endpoints: latency-svc-f8ddv [1.06071879s]
Feb 5 21:09:46.024: INFO: Created: latency-svc-zf5mt
Feb 5 21:09:46.033: INFO: Got endpoints: latency-svc-zf5mt [1.077908202s]
Feb 5 21:09:46.117: INFO: Created: latency-svc-pkrgr
Feb 5 21:09:46.122: INFO: Got endpoints: latency-svc-pkrgr [1.011046739s]
Feb 5 21:09:46.142: INFO: Created: latency-svc-zt62g
Feb 5 21:09:46.147: INFO: Got endpoints: latency-svc-zt62g [1.027333797s]
Feb 5 21:09:46.162: INFO: Created: latency-svc-pkjpx
Feb 5 21:09:46.182: INFO: Got endpoints: latency-svc-pkjpx [931.891981ms]
Feb 5 21:09:46.301: INFO: Created: latency-svc-pshlk
Feb 5 21:09:46.343: INFO: Got endpoints: latency-svc-pshlk [1.029983633s]
Feb 5 21:09:46.346: INFO: Created: latency-svc-chpvx
Feb 5 21:09:46.356: INFO: Got endpoints: latency-svc-chpvx [1.020278764s]
Feb 5 21:09:46.386: INFO: Created: latency-svc-qk8v4
Feb 5 21:09:46.452: INFO: Got endpoints: latency-svc-qk8v4 [1.027976101s]
Feb 5 21:09:46.473: INFO: Created: latency-svc-dbxdc
Feb 5 21:09:46.476: INFO: Got endpoints: latency-svc-dbxdc [1.009993654s]
Feb 5 21:09:46.502: INFO: Created: latency-svc-dw6l9
Feb 5 21:09:46.513: INFO: Got endpoints: latency-svc-dw6l9 [1.02055125s]
Feb 5 21:09:46.537: INFO: Created: latency-svc-mzd9w
Feb 5 21:09:46.593: INFO: Got endpoints: latency-svc-mzd9w [1.022596389s]
Feb 5 21:09:46.632: INFO: Created: latency-svc-5whnv
Feb 5 21:09:46.649: INFO: Got endpoints: latency-svc-5whnv [1.018075747s]
Feb 5 21:09:46.683: INFO: Created: latency-svc-ggckn
Feb 5 21:09:46.685: INFO: Got endpoints: latency-svc-ggckn [889.953814ms]
Feb 5 21:09:46.771: INFO: Created: latency-svc-2gkkf
Feb 5 21:09:46.774: INFO: Got endpoints: latency-svc-2gkkf [909.402581ms]
Feb 5 21:09:46.797: INFO: Created: latency-svc-q869x
Feb 5 21:09:46.824: INFO: Got endpoints: latency-svc-q869x [852.238709ms]
Feb 5 21:09:46.845: INFO: Created: latency-svc-z72jc
Feb 5 21:09:46.976: INFO: Got endpoints: latency-svc-z72jc [990.045252ms]
Feb 5 21:09:46.986: INFO: Created: latency-svc-h5vww
Feb 5 21:09:46.991: INFO: Got endpoints: latency-svc-h5vww [958.249616ms]
Feb 5 21:09:47.028: INFO: Created: latency-svc-7pgf8
Feb 5 21:09:47.031: INFO: Got endpoints: latency-svc-7pgf8 [908.997736ms]
Feb 5 21:09:47.068: INFO: Created: latency-svc-4tsz8
Feb 5 21:09:47.138: INFO: Got endpoints: latency-svc-4tsz8 [990.073106ms]
Feb 5 21:09:47.178: INFO: Created: latency-svc-bgbj4
Feb 5 21:09:47.182: INFO: Got endpoints: latency-svc-bgbj4 [999.231065ms]
Feb 5 21:09:47.214: INFO: Created: latency-svc-zlht2
Feb 5 21:09:47.231: INFO: Got endpoints: latency-svc-zlht2 [887.74748ms]
Feb 5 21:09:47.299: INFO: Created: latency-svc-99tqw
Feb 5 21:09:47.313: INFO: Got endpoints: latency-svc-99tqw [956.54861ms]
Feb 5 21:09:47.368: INFO: Created: latency-svc-r8jxz
Feb 5 21:09:47.381: INFO: Got endpoints: latency-svc-r8jxz [929.119304ms]
Feb 5 21:09:47.401: INFO: Created: latency-svc-4mtt7
Feb 5 21:09:47.451: INFO: Got endpoints: latency-svc-4mtt7 [975.165229ms]
Feb 5 21:09:47.461: INFO: Created: latency-svc-krbd8
Feb 5 21:09:47.467: INFO: Got endpoints: latency-svc-krbd8 [954.424234ms]
Feb 5 21:09:47.488: INFO: Created: latency-svc-2zrh2
Feb 5 21:09:47.500: INFO: Got endpoints: latency-svc-2zrh2 [905.887486ms]
Feb 5 21:09:47.526: INFO: Created: latency-svc-gxbxg
Feb 5 21:09:47.529: INFO: Got endpoints: latency-svc-gxbxg [879.824041ms]
Feb 5 21:09:47.546: INFO: Created: latency-svc-6k8t8
Feb 5 21:09:47.603: INFO: Created: latency-svc-899qj
Feb 5 21:09:47.603: INFO: Got endpoints: latency-svc-6k8t8 [917.492761ms]
Feb 5 21:09:47.613: INFO: Got endpoints: latency-svc-899qj [838.561422ms]
Feb 5 21:09:47.645: INFO: Created: latency-svc-jqtt9
Feb 5 21:09:47.657: INFO: Got endpoints: latency-svc-jqtt9 [832.995594ms]
Feb 5 21:09:47.679: INFO: Created: latency-svc-czcmb
Feb 5 21:09:47.692: INFO: Got endpoints: latency-svc-czcmb [715.461903ms]
Feb 5 21:09:47.766: INFO: Created: latency-svc-qfkbw
Feb 5 21:09:47.789: INFO: Got endpoints: latency-svc-qfkbw [798.069352ms]
Feb 5 21:09:47.806: INFO: Created: latency-svc-rmdh9
Feb 5 21:09:47.808: INFO: Got endpoints: latency-svc-rmdh9 [776.609085ms]
Feb 5 21:09:47.859: INFO: Created: latency-svc-q6fcs
Feb 5 21:09:47.941: INFO: Created: latency-svc-bxqq8
Feb 5 21:09:47.941: INFO: Got endpoints: latency-svc-q6fcs [803.297446ms]
Feb 5 21:09:47.970: INFO: Got endpoints: latency-svc-bxqq8 [788.483926ms]
Feb 5 21:09:47.972: INFO: Created: latency-svc-mcz7h
Feb 5 21:09:47.984: INFO: Got endpoints: latency-svc-mcz7h [753.079191ms]
Feb 5 21:09:48.018: INFO: Created: latency-svc-2vx8t
Feb 5 21:09:48.136: INFO: Got endpoints: latency-svc-2vx8t [822.948356ms]
Feb 5 21:09:48.148: INFO: Created: latency-svc-5xwz6
Feb 5 21:09:48.163: INFO: Got endpoints: latency-svc-5xwz6 [782.365805ms]
Feb 5 21:09:48.222: INFO: Created: latency-svc-6qb76
Feb 5 21:09:48.316: INFO: Got endpoints: latency-svc-6qb76 [864.520742ms]
Feb 5 21:09:48.327: INFO: Created: latency-svc-zzqj7
Feb 5 21:09:48.338: INFO: Got endpoints: latency-svc-zzqj7 [870.149327ms]
Feb 5 21:09:48.368: INFO: Created: latency-svc-gmqcr
Feb 5 21:09:48.398: INFO: Got endpoints: latency-svc-gmqcr [898.026578ms]
Feb 5 21:09:48.400: INFO: Created: latency-svc-8b2xt
Feb 5 21:09:48.412: INFO: Got endpoints: latency-svc-8b2xt [883.139791ms]
Feb 5 21:09:48.497: INFO: Created: latency-svc-znvjp
Feb 5 21:09:48.510: INFO: Got endpoints: latency-svc-znvjp [906.867481ms]
Feb 5 21:09:48.537: INFO: Created: latency-svc-hs7kj
Feb 5 21:09:48.558: INFO: Got endpoints: latency-svc-hs7kj [945.287508ms]
Feb 5 21:09:48.559: INFO: Created: latency-svc-h5twr
Feb 5 21:09:48.593: INFO: Got endpoints: latency-svc-h5twr [936.009152ms]
Feb 5 21:09:48.661: INFO: Created: latency-svc-xstgc
Feb 5 21:09:48.668: INFO: Got endpoints: latency-svc-xstgc [975.321414ms]
Feb 5 21:09:48.722: INFO: Created: latency-svc-x4tzr
Feb 5 21:09:48.728: INFO: Got endpoints: latency-svc-x4tzr [938.899392ms]
Feb 5 21:09:48.758: INFO: Created: latency-svc-m56f7
Feb 5 21:09:48.845: INFO: Got endpoints: latency-svc-m56f7 [1.03652824s]
Feb 5 21:09:48.850: INFO: Created: latency-svc-fg76d
Feb 5 21:09:48.862: INFO: Got endpoints: latency-svc-fg76d [920.164093ms]
Feb 5 21:09:48.880: INFO: Created: latency-svc-7cf64
Feb 5 21:09:48.885: INFO: Got endpoints: latency-svc-7cf64 [914.223071ms]
Feb 5 21:09:48.948: INFO: Created: latency-svc-qfcvs
Feb 5 21:09:49.018: INFO: Got endpoints: latency-svc-qfcvs [1.033898451s]
Feb 5 21:09:49.083: INFO: Created: latency-svc-bq57t
Feb 5 21:09:49.090: INFO: Got endpoints: latency-svc-bq57t [953.676541ms]
Feb 5 21:09:49.179: INFO: Created: latency-svc-wx55f
Feb 5 21:09:49.184: INFO: Got endpoints: latency-svc-wx55f [1.020185051s]
Feb 5 21:09:49.246: INFO: Created: latency-svc-qgzql
Feb 5 21:09:49.262: INFO: Got endpoints: latency-svc-qgzql [945.397945ms]
Feb 5 21:09:49.355: INFO: Created: latency-svc-nmpqn
Feb 5 21:09:49.381: INFO: Got endpoints: latency-svc-nmpqn [1.043008275s]
Feb 5 21:09:49.384: INFO: Created: latency-svc-2zl7g
Feb 5 21:09:49.385: INFO: Got endpoints: latency-svc-2zl7g [987.046258ms]
Feb 5 21:09:49.413: INFO: Created: latency-svc-9rgjf
Feb 5 21:09:49.415: INFO: Got endpoints: latency-svc-9rgjf [1.002552703s]
Feb 5 21:09:49.439: INFO: Created: latency-svc-rq6bl
Feb 5 21:09:49.446: INFO: Got endpoints: latency-svc-rq6bl [936.012474ms]
Feb 5 21:09:49.513: INFO: Created: latency-svc-tk5rh
Feb 5 21:09:49.526: INFO: Got endpoints: latency-svc-tk5rh [967.498028ms]
Feb 5 21:09:49.546: INFO: Created: latency-svc-q5cb8
Feb 5 21:09:49.549: INFO: Got endpoints: latency-svc-q5cb8 [955.94145s]
Feb 5 21:09:49.572: INFO: Created: latency-svc-54v89
Feb 5 21:09:49.577: INFO: Got endpoints: latency-svc-54v89 [909.571333ms]
Feb 5 21:09:49.602: INFO: Created: latency-svc-6pzw7
Feb 5 21:09:49.609: INFO: Got endpoints: latency-svc-6pzw7 [880.505657ms]
Feb 5 21:09:49.687: INFO: Created: latency-svc-zmfq9
Feb 5 21:09:49.712: INFO: Got endpoints: latency-svc-zmfq9 [866.801417ms]
Feb
5 21:09:49.736: INFO: Created: latency-svc-xddg4 Feb 5 21:09:49.761: INFO: Created: latency-svc-cz97n Feb 5 21:09:49.761: INFO: Got endpoints: latency-svc-xddg4 [898.862098ms] Feb 5 21:09:49.814: INFO: Got endpoints: latency-svc-cz97n [928.829236ms] Feb 5 21:09:49.827: INFO: Created: latency-svc-75brg Feb 5 21:09:49.832: INFO: Got endpoints: latency-svc-75brg [813.583261ms] Feb 5 21:09:49.861: INFO: Created: latency-svc-7kd5q Feb 5 21:09:49.891: INFO: Got endpoints: latency-svc-7kd5q [801.457595ms] Feb 5 21:09:49.982: INFO: Created: latency-svc-6vl8j Feb 5 21:09:49.992: INFO: Got endpoints: latency-svc-6vl8j [808.419789ms] Feb 5 21:09:50.010: INFO: Created: latency-svc-m8flh Feb 5 21:09:50.023: INFO: Got endpoints: latency-svc-m8flh [761.301877ms] Feb 5 21:09:50.054: INFO: Created: latency-svc-2j525 Feb 5 21:09:50.054: INFO: Got endpoints: latency-svc-2j525 [673.352044ms] Feb 5 21:09:50.080: INFO: Created: latency-svc-z95d2 Feb 5 21:09:50.189: INFO: Got endpoints: latency-svc-z95d2 [803.947553ms] Feb 5 21:09:50.217: INFO: Created: latency-svc-whbmp Feb 5 21:09:50.245: INFO: Got endpoints: latency-svc-whbmp [830.272283ms] Feb 5 21:09:50.403: INFO: Created: latency-svc-6vb6p Feb 5 21:09:50.408: INFO: Got endpoints: latency-svc-6vb6p [961.462622ms] Feb 5 21:09:50.439: INFO: Created: latency-svc-dd6hx Feb 5 21:09:50.447: INFO: Got endpoints: latency-svc-dd6hx [920.967875ms] Feb 5 21:09:50.468: INFO: Created: latency-svc-qwggm Feb 5 21:09:50.486: INFO: Got endpoints: latency-svc-qwggm [936.661951ms] Feb 5 21:09:50.571: INFO: Created: latency-svc-79kgv Feb 5 21:09:50.602: INFO: Got endpoints: latency-svc-79kgv [1.025037557s] Feb 5 21:09:50.607: INFO: Created: latency-svc-5k947 Feb 5 21:09:50.612: INFO: Got endpoints: latency-svc-5k947 [1.002535109s] Feb 5 21:09:50.632: INFO: Created: latency-svc-d8kjf Feb 5 21:09:50.639: INFO: Got endpoints: latency-svc-d8kjf [927.32858ms] Feb 5 21:09:50.734: INFO: Created: latency-svc-q7sj7 Feb 5 21:09:50.761: INFO: Got endpoints: 
latency-svc-q7sj7 [1.000334893s] Feb 5 21:09:50.790: INFO: Created: latency-svc-zvfjj Feb 5 21:09:50.804: INFO: Got endpoints: latency-svc-zvfjj [990.0016ms] Feb 5 21:09:50.905: INFO: Created: latency-svc-2b4rw Feb 5 21:09:50.929: INFO: Got endpoints: latency-svc-2b4rw [1.097294005s] Feb 5 21:09:50.963: INFO: Created: latency-svc-dhmnz Feb 5 21:09:50.986: INFO: Got endpoints: latency-svc-dhmnz [1.094416086s] Feb 5 21:09:50.988: INFO: Created: latency-svc-6tv24 Feb 5 21:09:51.131: INFO: Got endpoints: latency-svc-6tv24 [1.138801783s] Feb 5 21:09:51.137: INFO: Created: latency-svc-n8llj Feb 5 21:09:51.143: INFO: Got endpoints: latency-svc-n8llj [1.120176423s] Feb 5 21:09:51.219: INFO: Created: latency-svc-6dpnp Feb 5 21:09:51.229: INFO: Got endpoints: latency-svc-6dpnp [1.175004694s] Feb 5 21:09:51.378: INFO: Created: latency-svc-q4b22 Feb 5 21:09:51.382: INFO: Got endpoints: latency-svc-q4b22 [1.192454664s] Feb 5 21:09:51.426: INFO: Created: latency-svc-hb6mc Feb 5 21:09:51.442: INFO: Got endpoints: latency-svc-hb6mc [1.19668447s] Feb 5 21:09:51.549: INFO: Created: latency-svc-bkgrf Feb 5 21:09:51.581: INFO: Got endpoints: latency-svc-bkgrf [1.172799075s] Feb 5 21:09:51.630: INFO: Created: latency-svc-44psv Feb 5 21:09:51.638: INFO: Got endpoints: latency-svc-44psv [1.191175619s] Feb 5 21:09:51.773: INFO: Created: latency-svc-z42lt Feb 5 21:09:51.792: INFO: Got endpoints: latency-svc-z42lt [1.305718049s] Feb 5 21:09:51.837: INFO: Created: latency-svc-7j4d8 Feb 5 21:09:51.845: INFO: Got endpoints: latency-svc-7j4d8 [1.242304578s] Feb 5 21:09:51.960: INFO: Created: latency-svc-p446f Feb 5 21:09:51.985: INFO: Got endpoints: latency-svc-p446f [1.372945409s] Feb 5 21:09:52.014: INFO: Created: latency-svc-jvmjn Feb 5 21:09:52.028: INFO: Got endpoints: latency-svc-jvmjn [1.38913291s] Feb 5 21:09:52.056: INFO: Created: latency-svc-8xbbn Feb 5 21:09:52.195: INFO: Got endpoints: latency-svc-8xbbn [1.434162544s] Feb 5 21:09:52.206: INFO: Created: latency-svc-bhk8s Feb 5 
21:09:52.218: INFO: Got endpoints: latency-svc-bhk8s [1.413706882s] Feb 5 21:09:52.245: INFO: Created: latency-svc-77qdr Feb 5 21:09:52.252: INFO: Got endpoints: latency-svc-77qdr [1.32240707s] Feb 5 21:09:52.285: INFO: Created: latency-svc-86ncr Feb 5 21:09:52.289: INFO: Got endpoints: latency-svc-86ncr [1.303322897s] Feb 5 21:09:52.367: INFO: Created: latency-svc-t4pkc Feb 5 21:09:52.377: INFO: Got endpoints: latency-svc-t4pkc [1.245208769s] Feb 5 21:09:52.411: INFO: Created: latency-svc-z7jcd Feb 5 21:09:52.423: INFO: Got endpoints: latency-svc-z7jcd [1.279439232s] Feb 5 21:09:52.513: INFO: Created: latency-svc-l6wnm Feb 5 21:09:52.547: INFO: Got endpoints: latency-svc-l6wnm [1.317320859s] Feb 5 21:09:52.566: INFO: Created: latency-svc-jb884 Feb 5 21:09:52.577: INFO: Got endpoints: latency-svc-jb884 [1.194686435s] Feb 5 21:09:52.662: INFO: Created: latency-svc-vfsds Feb 5 21:09:52.665: INFO: Got endpoints: latency-svc-vfsds [1.223337583s] Feb 5 21:09:52.695: INFO: Created: latency-svc-kh9vl Feb 5 21:09:52.711: INFO: Created: latency-svc-2lc25 Feb 5 21:09:52.716: INFO: Got endpoints: latency-svc-kh9vl [1.135568004s] Feb 5 21:09:52.731: INFO: Got endpoints: latency-svc-2lc25 [1.092249778s] Feb 5 21:09:52.755: INFO: Created: latency-svc-97dm2 Feb 5 21:09:52.813: INFO: Got endpoints: latency-svc-97dm2 [1.020396874s] Feb 5 21:09:52.826: INFO: Created: latency-svc-ztgvf Feb 5 21:09:52.829: INFO: Got endpoints: latency-svc-ztgvf [984.565896ms] Feb 5 21:09:52.855: INFO: Created: latency-svc-flkhr Feb 5 21:09:52.860: INFO: Got endpoints: latency-svc-flkhr [875.325111ms] Feb 5 21:09:52.881: INFO: Created: latency-svc-gxdj4 Feb 5 21:09:52.895: INFO: Got endpoints: latency-svc-gxdj4 [866.638977ms] Feb 5 21:09:52.904: INFO: Created: latency-svc-zjdc2 Feb 5 21:09:52.908: INFO: Got endpoints: latency-svc-zjdc2 [712.510696ms] Feb 5 21:09:52.986: INFO: Created: latency-svc-fwwp7 Feb 5 21:09:53.009: INFO: Got endpoints: latency-svc-fwwp7 [791.122622ms] Feb 5 21:09:53.063: INFO: 
Created: latency-svc-68kwc Feb 5 21:09:53.077: INFO: Got endpoints: latency-svc-68kwc [824.755618ms] Feb 5 21:09:53.193: INFO: Created: latency-svc-xd6vv Feb 5 21:09:53.203: INFO: Got endpoints: latency-svc-xd6vv [913.446995ms] Feb 5 21:09:53.246: INFO: Created: latency-svc-zlr8l Feb 5 21:09:53.257: INFO: Got endpoints: latency-svc-zlr8l [879.959863ms] Feb 5 21:09:53.273: INFO: Created: latency-svc-9pjzn Feb 5 21:09:53.284: INFO: Got endpoints: latency-svc-9pjzn [860.857357ms] Feb 5 21:09:53.327: INFO: Created: latency-svc-2hg86 Feb 5 21:09:53.338: INFO: Got endpoints: latency-svc-2hg86 [790.326696ms] Feb 5 21:09:53.356: INFO: Created: latency-svc-mcjtp Feb 5 21:09:53.360: INFO: Got endpoints: latency-svc-mcjtp [782.814002ms] Feb 5 21:09:53.373: INFO: Created: latency-svc-2fqqp Feb 5 21:09:53.379: INFO: Got endpoints: latency-svc-2fqqp [713.544464ms] Feb 5 21:09:53.402: INFO: Created: latency-svc-ln7f7 Feb 5 21:09:53.466: INFO: Got endpoints: latency-svc-ln7f7 [750.003735ms] Feb 5 21:09:53.481: INFO: Created: latency-svc-t8dqw Feb 5 21:09:53.498: INFO: Got endpoints: latency-svc-t8dqw [767.369689ms] Feb 5 21:09:53.528: INFO: Created: latency-svc-nglqb Feb 5 21:09:53.544: INFO: Got endpoints: latency-svc-nglqb [731.446321ms] Feb 5 21:09:53.566: INFO: Created: latency-svc-9qv4b Feb 5 21:09:53.622: INFO: Got endpoints: latency-svc-9qv4b [792.600256ms] Feb 5 21:09:53.628: INFO: Created: latency-svc-98ltb Feb 5 21:09:53.663: INFO: Created: latency-svc-8vm79 Feb 5 21:09:53.663: INFO: Got endpoints: latency-svc-98ltb [802.661386ms] Feb 5 21:09:53.696: INFO: Got endpoints: latency-svc-8vm79 [800.220127ms] Feb 5 21:09:53.698: INFO: Created: latency-svc-rtfp2 Feb 5 21:09:53.711: INFO: Got endpoints: latency-svc-rtfp2 [803.151328ms] Feb 5 21:09:53.787: INFO: Created: latency-svc-wzn2n Feb 5 21:09:53.791: INFO: Got endpoints: latency-svc-wzn2n [782.163423ms] Feb 5 21:09:53.829: INFO: Created: latency-svc-bllck Feb 5 21:09:53.835: INFO: Got endpoints: latency-svc-bllck 
[758.523282ms] Feb 5 21:09:53.874: INFO: Created: latency-svc-8mrpd Feb 5 21:09:53.939: INFO: Got endpoints: latency-svc-8mrpd [736.161395ms] Feb 5 21:09:53.950: INFO: Created: latency-svc-hlhck Feb 5 21:09:53.985: INFO: Got endpoints: latency-svc-hlhck [728.634837ms] Feb 5 21:09:54.031: INFO: Created: latency-svc-vpttv Feb 5 21:09:54.095: INFO: Created: latency-svc-l5nnz Feb 5 21:09:54.096: INFO: Got endpoints: latency-svc-vpttv [811.616218ms] Feb 5 21:09:54.149: INFO: Created: latency-svc-l9fmt Feb 5 21:09:54.150: INFO: Got endpoints: latency-svc-l5nnz [812.317325ms] Feb 5 21:09:54.165: INFO: Got endpoints: latency-svc-l9fmt [805.843827ms] Feb 5 21:09:54.301: INFO: Created: latency-svc-lzr6b Feb 5 21:09:54.342: INFO: Created: latency-svc-48kc9 Feb 5 21:09:54.342: INFO: Got endpoints: latency-svc-lzr6b [962.417122ms] Feb 5 21:09:54.347: INFO: Got endpoints: latency-svc-48kc9 [880.270901ms] Feb 5 21:09:54.486: INFO: Created: latency-svc-qkjk6 Feb 5 21:09:54.520: INFO: Created: latency-svc-6rrj6 Feb 5 21:09:54.525: INFO: Got endpoints: latency-svc-qkjk6 [1.026402468s] Feb 5 21:09:54.568: INFO: Got endpoints: latency-svc-6rrj6 [1.023729672s] Feb 5 21:09:54.629: INFO: Created: latency-svc-k7bd7 Feb 5 21:09:54.639: INFO: Got endpoints: latency-svc-k7bd7 [1.016327779s] Feb 5 21:09:54.659: INFO: Created: latency-svc-75qlq Feb 5 21:09:54.661: INFO: Got endpoints: latency-svc-75qlq [997.733672ms] Feb 5 21:09:54.687: INFO: Created: latency-svc-fnfp5 Feb 5 21:09:54.717: INFO: Created: latency-svc-bmhnc Feb 5 21:09:54.717: INFO: Got endpoints: latency-svc-fnfp5 [1.021742201s] Feb 5 21:09:54.775: INFO: Got endpoints: latency-svc-bmhnc [1.064029582s] Feb 5 21:09:54.777: INFO: Created: latency-svc-xqltl Feb 5 21:09:54.787: INFO: Got endpoints: latency-svc-xqltl [995.426983ms] Feb 5 21:09:54.806: INFO: Created: latency-svc-hz8zg Feb 5 21:09:54.826: INFO: Got endpoints: latency-svc-hz8zg [990.19534ms] Feb 5 21:09:54.862: INFO: Created: latency-svc-zkgps Feb 5 21:09:54.947: INFO: 
Created: latency-svc-dr8j9 Feb 5 21:09:54.952: INFO: Got endpoints: latency-svc-zkgps [1.011994644s] Feb 5 21:09:54.991: INFO: Got endpoints: latency-svc-dr8j9 [1.005403291s] Feb 5 21:09:54.993: INFO: Created: latency-svc-llxc5 Feb 5 21:09:54.997: INFO: Got endpoints: latency-svc-llxc5 [901.176016ms] Feb 5 21:09:55.121: INFO: Created: latency-svc-ngw5r Feb 5 21:09:55.121: INFO: Got endpoints: latency-svc-ngw5r [970.965163ms] Feb 5 21:09:55.167: INFO: Created: latency-svc-ktkng Feb 5 21:09:55.170: INFO: Got endpoints: latency-svc-ktkng [1.004753392s] Feb 5 21:09:55.206: INFO: Created: latency-svc-vtx85 Feb 5 21:09:55.310: INFO: Got endpoints: latency-svc-vtx85 [968.287322ms] Feb 5 21:09:55.317: INFO: Created: latency-svc-9f8xf Feb 5 21:09:55.327: INFO: Got endpoints: latency-svc-9f8xf [980.218513ms] Feb 5 21:09:55.365: INFO: Created: latency-svc-4drh2 Feb 5 21:09:55.370: INFO: Got endpoints: latency-svc-4drh2 [844.501579ms] Feb 5 21:09:55.400: INFO: Created: latency-svc-2d8rp Feb 5 21:09:55.400: INFO: Got endpoints: latency-svc-2d8rp [831.346895ms] Feb 5 21:09:55.457: INFO: Created: latency-svc-wlxnc Feb 5 21:09:55.458: INFO: Got endpoints: latency-svc-wlxnc [818.60978ms] Feb 5 21:09:55.458: INFO: Latencies: [46.926779ms 76.604748ms 155.603331ms 215.819402ms 404.628726ms 441.690242ms 556.228797ms 594.170599ms 673.352044ms 712.510696ms 713.544464ms 715.461903ms 728.634837ms 731.446321ms 736.161395ms 750.003735ms 753.079191ms 758.523282ms 761.301877ms 766.885522ms 767.369689ms 776.609085ms 782.163423ms 782.365805ms 782.814002ms 788.483926ms 790.326696ms 791.122622ms 792.600256ms 798.069352ms 800.220127ms 801.457595ms 802.661386ms 803.151328ms 803.297446ms 803.947553ms 805.843827ms 808.419789ms 811.616218ms 812.317325ms 813.583261ms 817.157506ms 818.60978ms 822.948356ms 824.755618ms 830.272283ms 831.346895ms 832.995594ms 838.561422ms 844.501579ms 852.238709ms 860.857357ms 864.520742ms 866.638977ms 866.801417ms 870.149327ms 875.325111ms 879.824041ms 879.959863ms 
880.270901ms 880.505657ms 883.139791ms 887.74748ms 889.953814ms 898.026578ms 898.862098ms 901.176016ms 905.887486ms 906.867481ms 908.997736ms 909.402581ms 909.571333ms 913.446995ms 914.223071ms 917.492761ms 920.164093ms 920.967875ms 927.32858ms 928.829236ms 929.119304ms 931.891981ms 936.009152ms 936.012474ms 936.661951ms 938.899392ms 945.287508ms 945.397945ms 950.613209ms 953.676541ms 954.424234ms 955.94145ms 956.54861ms 958.249616ms 961.462622ms 962.417122ms 964.239782ms 967.498028ms 968.287322ms 970.965163ms 975.165229ms 975.321414ms 980.218513ms 984.565896ms 987.046258ms 990.0016ms 990.045252ms 990.073106ms 990.19534ms 991.166896ms 995.426983ms 995.464235ms 996.336679ms 997.733672ms 999.231065ms 1.000334893s 1.002535109s 1.002552703s 1.004753392s 1.005403291s 1.009993654s 1.011046739s 1.011764858s 1.011994644s 1.012088784s 1.016327779s 1.018075747s 1.020185051s 1.020278764s 1.020396874s 1.02055125s 1.021742201s 1.022596389s 1.023729672s 1.025037557s 1.026402468s 1.027333797s 1.027976101s 1.029983633s 1.033898451s 1.03652824s 1.043008275s 1.058438225s 1.06071879s 1.064029582s 1.077908202s 1.081774183s 1.092249778s 1.093651816s 1.094416086s 1.097294005s 1.107411868s 1.120176423s 1.135568004s 1.138801783s 1.15308961s 1.172799075s 1.175004694s 1.191175619s 1.192454664s 1.194684031s 1.194686435s 1.19668447s 1.201406655s 1.207490019s 1.223337583s 1.240409658s 1.242304578s 1.245208769s 1.245739419s 1.245980526s 1.250154074s 1.256019807s 1.267251587s 1.267776616s 1.274924205s 1.279439232s 1.281336225s 1.303322897s 1.305718049s 1.317320859s 1.32240707s 1.372945409s 1.38913291s 1.413706882s 1.434162544s 2.100575986s 2.154188421s 2.204946531s 2.212495733s 2.213649927s 2.216781544s 2.245186086s 2.251615451s 2.25757581s 2.276019185s 2.277917924s 2.304932656s 2.331418511s 2.35996754s 2.453293347s] Feb 5 21:09:55.458: INFO: 50 %ile: 975.321414ms Feb 5 21:09:55.458: INFO: 90 %ile: 1.32240707s Feb 5 21:09:55.458: INFO: 99 %ile: 2.35996754s Feb 5 21:09:55.458: INFO: Total sample 
count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:09:55.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1399" for this suite. • [SLOW TEST:22.855 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":6,"skipped":101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:09:55.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:10:01.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubelet-test-4374" for this suite. • [SLOW TEST:6.227 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":142,"failed":0} SSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:10:01.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Feb 5 21:10:28.154: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8393 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} 
Feb 5 21:10:28.154: INFO: >>> kubeConfig: /root/.kube/config I0205 21:10:28.199273 9 log.go:172] (0xc0014b3a20) (0xc0027bdf40) Create stream I0205 21:10:28.199395 9 log.go:172] (0xc0014b3a20) (0xc0027bdf40) Stream added, broadcasting: 1 I0205 21:10:28.203918 9 log.go:172] (0xc0014b3a20) Reply frame received for 1 I0205 21:10:28.203971 9 log.go:172] (0xc0014b3a20) (0xc001dfc780) Create stream I0205 21:10:28.203978 9 log.go:172] (0xc0014b3a20) (0xc001dfc780) Stream added, broadcasting: 3 I0205 21:10:28.208464 9 log.go:172] (0xc0014b3a20) Reply frame received for 3 I0205 21:10:28.208585 9 log.go:172] (0xc0014b3a20) (0xc001ee0000) Create stream I0205 21:10:28.208631 9 log.go:172] (0xc0014b3a20) (0xc001ee0000) Stream added, broadcasting: 5 I0205 21:10:28.210467 9 log.go:172] (0xc0014b3a20) Reply frame received for 5 I0205 21:10:28.281666 9 log.go:172] (0xc0014b3a20) Data frame received for 3 I0205 21:10:28.281740 9 log.go:172] (0xc001dfc780) (3) Data frame handling I0205 21:10:28.281762 9 log.go:172] (0xc001dfc780) (3) Data frame sent I0205 21:10:28.364610 9 log.go:172] (0xc0014b3a20) (0xc001dfc780) Stream removed, broadcasting: 3 I0205 21:10:28.364975 9 log.go:172] (0xc0014b3a20) (0xc001ee0000) Stream removed, broadcasting: 5 I0205 21:10:28.365046 9 log.go:172] (0xc0014b3a20) Data frame received for 1 I0205 21:10:28.365074 9 log.go:172] (0xc0027bdf40) (1) Data frame handling I0205 21:10:28.365110 9 log.go:172] (0xc0027bdf40) (1) Data frame sent I0205 21:10:28.365139 9 log.go:172] (0xc0014b3a20) (0xc0027bdf40) Stream removed, broadcasting: 1 I0205 21:10:28.365364 9 log.go:172] (0xc0014b3a20) Go away received I0205 21:10:28.366170 9 log.go:172] (0xc0014b3a20) (0xc0027bdf40) Stream removed, broadcasting: 1 I0205 21:10:28.366198 9 log.go:172] (0xc0014b3a20) (0xc001dfc780) Stream removed, broadcasting: 3 I0205 21:10:28.366221 9 log.go:172] (0xc0014b3a20) (0xc001ee0000) Stream removed, broadcasting: 5 Feb 5 21:10:28.366: INFO: Exec stderr: "" Feb 5 21:10:28.366: INFO: 
ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8393 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:10:28.366: INFO: >>> kubeConfig: /root/.kube/config I0205 21:10:28.438096 9 log.go:172] (0xc0015334a0) (0xc002266fa0) Create stream I0205 21:10:28.438326 9 log.go:172] (0xc0015334a0) (0xc002266fa0) Stream added, broadcasting: 1 I0205 21:10:28.443971 9 log.go:172] (0xc0015334a0) Reply frame received for 1 I0205 21:10:28.444025 9 log.go:172] (0xc0015334a0) (0xc001ee00a0) Create stream I0205 21:10:28.444078 9 log.go:172] (0xc0015334a0) (0xc001ee00a0) Stream added, broadcasting: 3 I0205 21:10:28.446878 9 log.go:172] (0xc0015334a0) Reply frame received for 3 I0205 21:10:28.446906 9 log.go:172] (0xc0015334a0) (0xc001ee0140) Create stream I0205 21:10:28.446916 9 log.go:172] (0xc0015334a0) (0xc001ee0140) Stream added, broadcasting: 5 I0205 21:10:28.449369 9 log.go:172] (0xc0015334a0) Reply frame received for 5 I0205 21:10:28.599402 9 log.go:172] (0xc0015334a0) Data frame received for 3 I0205 21:10:28.599953 9 log.go:172] (0xc001ee00a0) (3) Data frame handling I0205 21:10:28.600004 9 log.go:172] (0xc001ee00a0) (3) Data frame sent I0205 21:10:28.717188 9 log.go:172] (0xc0015334a0) (0xc001ee0140) Stream removed, broadcasting: 5 I0205 21:10:28.717776 9 log.go:172] (0xc0015334a0) (0xc001ee00a0) Stream removed, broadcasting: 3 I0205 21:10:28.717822 9 log.go:172] (0xc0015334a0) Data frame received for 1 I0205 21:10:28.717842 9 log.go:172] (0xc002266fa0) (1) Data frame handling I0205 21:10:28.717904 9 log.go:172] (0xc002266fa0) (1) Data frame sent I0205 21:10:28.717913 9 log.go:172] (0xc0015334a0) (0xc002266fa0) Stream removed, broadcasting: 1 I0205 21:10:28.717935 9 log.go:172] (0xc0015334a0) Go away received I0205 21:10:28.718514 9 log.go:172] (0xc0015334a0) (0xc002266fa0) Stream removed, broadcasting: 1 I0205 21:10:28.718713 9 log.go:172] (0xc0015334a0) (0xc001ee00a0) 
Stream removed, broadcasting: 3 I0205 21:10:28.718738 9 log.go:172] (0xc0015334a0) (0xc001ee0140) Stream removed, broadcasting: 5 Feb 5 21:10:28.718: INFO: Exec stderr: "" Feb 5 21:10:28.719: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8393 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:10:28.719: INFO: >>> kubeConfig: /root/.kube/config I0205 21:10:28.769917 9 log.go:172] (0xc0014156b0) (0xc001dfcaa0) Create stream I0205 21:10:28.770332 9 log.go:172] (0xc0014156b0) (0xc001dfcaa0) Stream added, broadcasting: 1 I0205 21:10:28.775120 9 log.go:172] (0xc0014156b0) Reply frame received for 1 I0205 21:10:28.775228 9 log.go:172] (0xc0014156b0) (0xc0021f9220) Create stream I0205 21:10:28.775241 9 log.go:172] (0xc0014156b0) (0xc0021f9220) Stream added, broadcasting: 3 I0205 21:10:28.776611 9 log.go:172] (0xc0014156b0) Reply frame received for 3 I0205 21:10:28.776670 9 log.go:172] (0xc0014156b0) (0xc001d88000) Create stream I0205 21:10:28.776682 9 log.go:172] (0xc0014156b0) (0xc001d88000) Stream added, broadcasting: 5 I0205 21:10:28.778293 9 log.go:172] (0xc0014156b0) Reply frame received for 5 I0205 21:10:28.853918 9 log.go:172] (0xc0014156b0) Data frame received for 3 I0205 21:10:28.854071 9 log.go:172] (0xc0021f9220) (3) Data frame handling I0205 21:10:28.854091 9 log.go:172] (0xc0021f9220) (3) Data frame sent I0205 21:10:28.947097 9 log.go:172] (0xc0014156b0) (0xc0021f9220) Stream removed, broadcasting: 3 I0205 21:10:28.947246 9 log.go:172] (0xc0014156b0) Data frame received for 1 I0205 21:10:28.947254 9 log.go:172] (0xc001dfcaa0) (1) Data frame handling I0205 21:10:28.947268 9 log.go:172] (0xc001dfcaa0) (1) Data frame sent I0205 21:10:28.947280 9 log.go:172] (0xc0014156b0) (0xc001dfcaa0) Stream removed, broadcasting: 1 I0205 21:10:28.947546 9 log.go:172] (0xc0014156b0) (0xc001d88000) Stream removed, broadcasting: 5 I0205 21:10:28.947569 9 log.go:172] 
(0xc0014156b0) (0xc001dfcaa0) Stream removed, broadcasting: 1 I0205 21:10:28.947578 9 log.go:172] (0xc0014156b0) (0xc0021f9220) Stream removed, broadcasting: 3 I0205 21:10:28.947584 9 log.go:172] (0xc0014156b0) (0xc001d88000) Stream removed, broadcasting: 5 I0205 21:10:28.947819 9 log.go:172] (0xc0014156b0) Go away received Feb 5 21:10:28.948: INFO: Exec stderr: "" Feb 5 21:10:28.948: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8393 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:10:28.948: INFO: >>> kubeConfig: /root/.kube/config I0205 21:10:29.029986 9 log.go:172] (0xc00222e000) (0xc0021f94a0) Create stream I0205 21:10:29.030056 9 log.go:172] (0xc00222e000) (0xc0021f94a0) Stream added, broadcasting: 1 I0205 21:10:29.033360 9 log.go:172] (0xc00222e000) Reply frame received for 1 I0205 21:10:29.033409 9 log.go:172] (0xc00222e000) (0xc001ee01e0) Create stream I0205 21:10:29.033416 9 log.go:172] (0xc00222e000) (0xc001ee01e0) Stream added, broadcasting: 3 I0205 21:10:29.034472 9 log.go:172] (0xc00222e000) Reply frame received for 3 I0205 21:10:29.034486 9 log.go:172] (0xc00222e000) (0xc0021f9540) Create stream I0205 21:10:29.034492 9 log.go:172] (0xc00222e000) (0xc0021f9540) Stream added, broadcasting: 5 I0205 21:10:29.036479 9 log.go:172] (0xc00222e000) Reply frame received for 5 I0205 21:10:29.105958 9 log.go:172] (0xc00222e000) Data frame received for 3 I0205 21:10:29.106012 9 log.go:172] (0xc001ee01e0) (3) Data frame handling I0205 21:10:29.106029 9 log.go:172] (0xc001ee01e0) (3) Data frame sent I0205 21:10:29.165306 9 log.go:172] (0xc00222e000) (0xc0021f9540) Stream removed, broadcasting: 5 I0205 21:10:29.165390 9 log.go:172] (0xc00222e000) Data frame received for 1 I0205 21:10:29.165399 9 log.go:172] (0xc0021f94a0) (1) Data frame handling I0205 21:10:29.165416 9 log.go:172] (0xc0021f94a0) (1) Data frame sent I0205 21:10:29.165426 9 log.go:172] 
(0xc00222e000) (0xc001ee01e0) Stream removed, broadcasting: 3 I0205 21:10:29.165439 9 log.go:172] (0xc00222e000) (0xc0021f94a0) Stream removed, broadcasting: 1 I0205 21:10:29.165444 9 log.go:172] (0xc00222e000) Go away received I0205 21:10:29.165675 9 log.go:172] (0xc00222e000) (0xc0021f94a0) Stream removed, broadcasting: 1 I0205 21:10:29.165744 9 log.go:172] (0xc00222e000) (0xc001ee01e0) Stream removed, broadcasting: 3 I0205 21:10:29.165789 9 log.go:172] (0xc00222e000) (0xc0021f9540) Stream removed, broadcasting: 5 Feb 5 21:10:29.165: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Feb 5 21:10:29.165: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8393 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:10:29.166: INFO: >>> kubeConfig: /root/.kube/config I0205 21:10:29.253594 9 log.go:172] (0xc00222e630) (0xc0021f9720) Create stream I0205 21:10:29.253711 9 log.go:172] (0xc00222e630) (0xc0021f9720) Stream added, broadcasting: 1 I0205 21:10:29.258322 9 log.go:172] (0xc00222e630) Reply frame received for 1 I0205 21:10:29.258392 9 log.go:172] (0xc00222e630) (0xc002267180) Create stream I0205 21:10:29.258412 9 log.go:172] (0xc00222e630) (0xc002267180) Stream added, broadcasting: 3 I0205 21:10:29.259879 9 log.go:172] (0xc00222e630) Reply frame received for 3 I0205 21:10:29.259901 9 log.go:172] (0xc00222e630) (0xc0021f97c0) Create stream I0205 21:10:29.259910 9 log.go:172] (0xc00222e630) (0xc0021f97c0) Stream added, broadcasting: 5 I0205 21:10:29.261174 9 log.go:172] (0xc00222e630) Reply frame received for 5 I0205 21:10:29.344543 9 log.go:172] (0xc00222e630) Data frame received for 3 I0205 21:10:29.344636 9 log.go:172] (0xc002267180) (3) Data frame handling I0205 21:10:29.344648 9 log.go:172] (0xc002267180) (3) Data frame sent I0205 21:10:29.419036 9 log.go:172] (0xc00222e630) Data frame 
received for 1 I0205 21:10:29.419121 9 log.go:172] (0xc0021f9720) (1) Data frame handling I0205 21:10:29.419145 9 log.go:172] (0xc0021f9720) (1) Data frame sent I0205 21:10:29.419288 9 log.go:172] (0xc00222e630) (0xc0021f9720) Stream removed, broadcasting: 1 I0205 21:10:29.419904 9 log.go:172] (0xc00222e630) (0xc002267180) Stream removed, broadcasting: 3 I0205 21:10:29.419962 9 log.go:172] (0xc00222e630) (0xc0021f97c0) Stream removed, broadcasting: 5 I0205 21:10:29.419977 9 log.go:172] (0xc00222e630) Go away received I0205 21:10:29.420011 9 log.go:172] (0xc00222e630) (0xc0021f9720) Stream removed, broadcasting: 1 I0205 21:10:29.420032 9 log.go:172] (0xc00222e630) (0xc002267180) Stream removed, broadcasting: 3 I0205 21:10:29.420045 9 log.go:172] (0xc00222e630) (0xc0021f97c0) Stream removed, broadcasting: 5 Feb 5 21:10:29.420: INFO: Exec stderr: "" Feb 5 21:10:29.420: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8393 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:10:29.420: INFO: >>> kubeConfig: /root/.kube/config I0205 21:10:29.482329 9 log.go:172] (0xc00244e0b0) (0xc001ee05a0) Create stream I0205 21:10:29.482483 9 log.go:172] (0xc00244e0b0) (0xc001ee05a0) Stream added, broadcasting: 1 I0205 21:10:29.486070 9 log.go:172] (0xc00244e0b0) Reply frame received for 1 I0205 21:10:29.486126 9 log.go:172] (0xc00244e0b0) (0xc002267220) Create stream I0205 21:10:29.486134 9 log.go:172] (0xc00244e0b0) (0xc002267220) Stream added, broadcasting: 3 I0205 21:10:29.487459 9 log.go:172] (0xc00244e0b0) Reply frame received for 3 I0205 21:10:29.487519 9 log.go:172] (0xc00244e0b0) (0xc001d880a0) Create stream I0205 21:10:29.487530 9 log.go:172] (0xc00244e0b0) (0xc001d880a0) Stream added, broadcasting: 5 I0205 21:10:29.488246 9 log.go:172] (0xc00244e0b0) Reply frame received for 5 I0205 21:10:29.548575 9 log.go:172] (0xc00244e0b0) Data frame received for 3 I0205 
21:10:29.548716 9 log.go:172] (0xc002267220) (3) Data frame handling I0205 21:10:29.548746 9 log.go:172] (0xc002267220) (3) Data frame sent I0205 21:10:29.614826 9 log.go:172] (0xc00244e0b0) Data frame received for 1 I0205 21:10:29.614919 9 log.go:172] (0xc00244e0b0) (0xc001d880a0) Stream removed, broadcasting: 5 I0205 21:10:29.614943 9 log.go:172] (0xc001ee05a0) (1) Data frame handling I0205 21:10:29.614953 9 log.go:172] (0xc001ee05a0) (1) Data frame sent I0205 21:10:29.614973 9 log.go:172] (0xc00244e0b0) (0xc002267220) Stream removed, broadcasting: 3 I0205 21:10:29.614986 9 log.go:172] (0xc00244e0b0) (0xc001ee05a0) Stream removed, broadcasting: 1 I0205 21:10:29.614996 9 log.go:172] (0xc00244e0b0) Go away received I0205 21:10:29.615073 9 log.go:172] (0xc00244e0b0) (0xc001ee05a0) Stream removed, broadcasting: 1 I0205 21:10:29.615098 9 log.go:172] (0xc00244e0b0) (0xc002267220) Stream removed, broadcasting: 3 I0205 21:10:29.615105 9 log.go:172] (0xc00244e0b0) (0xc001d880a0) Stream removed, broadcasting: 5 Feb 5 21:10:29.615: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Feb 5 21:10:29.615: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8393 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:10:29.615: INFO: >>> kubeConfig: /root/.kube/config I0205 21:10:29.663801 9 log.go:172] (0xc00202c580) (0xc001d88460) Create stream I0205 21:10:29.663941 9 log.go:172] (0xc00202c580) (0xc001d88460) Stream added, broadcasting: 1 I0205 21:10:29.666905 9 log.go:172] (0xc00202c580) Reply frame received for 1 I0205 21:10:29.666962 9 log.go:172] (0xc00202c580) (0xc0022672c0) Create stream I0205 21:10:29.666973 9 log.go:172] (0xc00202c580) (0xc0022672c0) Stream added, broadcasting: 3 I0205 21:10:29.668116 9 log.go:172] (0xc00202c580) Reply frame received for 3 I0205 21:10:29.668140 9 log.go:172] 
(0xc00202c580) (0xc002267360) Create stream I0205 21:10:29.668149 9 log.go:172] (0xc00202c580) (0xc002267360) Stream added, broadcasting: 5 I0205 21:10:29.668922 9 log.go:172] (0xc00202c580) Reply frame received for 5 I0205 21:10:29.737394 9 log.go:172] (0xc00202c580) Data frame received for 3 I0205 21:10:29.737523 9 log.go:172] (0xc0022672c0) (3) Data frame handling I0205 21:10:29.737537 9 log.go:172] (0xc0022672c0) (3) Data frame sent I0205 21:10:29.835566 9 log.go:172] (0xc00202c580) (0xc0022672c0) Stream removed, broadcasting: 3 I0205 21:10:29.835822 9 log.go:172] (0xc00202c580) (0xc002267360) Stream removed, broadcasting: 5 I0205 21:10:29.835906 9 log.go:172] (0xc00202c580) Data frame received for 1 I0205 21:10:29.835920 9 log.go:172] (0xc001d88460) (1) Data frame handling I0205 21:10:29.835938 9 log.go:172] (0xc001d88460) (1) Data frame sent I0205 21:10:29.835952 9 log.go:172] (0xc00202c580) (0xc001d88460) Stream removed, broadcasting: 1 I0205 21:10:29.835963 9 log.go:172] (0xc00202c580) Go away received I0205 21:10:29.836659 9 log.go:172] (0xc00202c580) (0xc001d88460) Stream removed, broadcasting: 1 I0205 21:10:29.836690 9 log.go:172] (0xc00202c580) (0xc0022672c0) Stream removed, broadcasting: 3 I0205 21:10:29.836717 9 log.go:172] (0xc00202c580) (0xc002267360) Stream removed, broadcasting: 5 Feb 5 21:10:29.836: INFO: Exec stderr: "" Feb 5 21:10:29.837: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8393 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:10:29.837: INFO: >>> kubeConfig: /root/.kube/config I0205 21:10:29.896350 9 log.go:172] (0xc00202c8f0) (0xc001d88640) Create stream I0205 21:10:29.896723 9 log.go:172] (0xc00202c8f0) (0xc001d88640) Stream added, broadcasting: 1 I0205 21:10:29.901986 9 log.go:172] (0xc00202c8f0) Reply frame received for 1 I0205 21:10:29.902149 9 log.go:172] (0xc00202c8f0) (0xc001ee0640) Create stream 
I0205 21:10:29.902167 9 log.go:172] (0xc00202c8f0) (0xc001ee0640) Stream added, broadcasting: 3 I0205 21:10:29.904362 9 log.go:172] (0xc00202c8f0) Reply frame received for 3 I0205 21:10:29.904405 9 log.go:172] (0xc00202c8f0) (0xc001d886e0) Create stream I0205 21:10:29.904417 9 log.go:172] (0xc00202c8f0) (0xc001d886e0) Stream added, broadcasting: 5 I0205 21:10:29.906333 9 log.go:172] (0xc00202c8f0) Reply frame received for 5 I0205 21:10:30.002309 9 log.go:172] (0xc00202c8f0) Data frame received for 3 I0205 21:10:30.002429 9 log.go:172] (0xc001ee0640) (3) Data frame handling I0205 21:10:30.002470 9 log.go:172] (0xc001ee0640) (3) Data frame sent I0205 21:10:30.063057 9 log.go:172] (0xc00202c8f0) Data frame received for 1 I0205 21:10:30.063253 9 log.go:172] (0xc00202c8f0) (0xc001d886e0) Stream removed, broadcasting: 5 I0205 21:10:30.063287 9 log.go:172] (0xc001d88640) (1) Data frame handling I0205 21:10:30.063337 9 log.go:172] (0xc001d88640) (1) Data frame sent I0205 21:10:30.063460 9 log.go:172] (0xc00202c8f0) (0xc001ee0640) Stream removed, broadcasting: 3 I0205 21:10:30.063591 9 log.go:172] (0xc00202c8f0) (0xc001d88640) Stream removed, broadcasting: 1 I0205 21:10:30.063607 9 log.go:172] (0xc00202c8f0) Go away received I0205 21:10:30.063697 9 log.go:172] (0xc00202c8f0) (0xc001d88640) Stream removed, broadcasting: 1 I0205 21:10:30.063725 9 log.go:172] (0xc00202c8f0) (0xc001ee0640) Stream removed, broadcasting: 3 I0205 21:10:30.063730 9 log.go:172] (0xc00202c8f0) (0xc001d886e0) Stream removed, broadcasting: 5 Feb 5 21:10:30.063: INFO: Exec stderr: "" Feb 5 21:10:30.063: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8393 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:10:30.063: INFO: >>> kubeConfig: /root/.kube/config I0205 21:10:30.111421 9 log.go:172] (0xc00222ec60) (0xc0021f99a0) Create stream I0205 21:10:30.111539 9 log.go:172] (0xc00222ec60) 
(0xc0021f99a0) Stream added, broadcasting: 1 I0205 21:10:30.113638 9 log.go:172] (0xc00222ec60) Reply frame received for 1 I0205 21:10:30.113658 9 log.go:172] (0xc00222ec60) (0xc002267400) Create stream I0205 21:10:30.113665 9 log.go:172] (0xc00222ec60) (0xc002267400) Stream added, broadcasting: 3 I0205 21:10:30.114632 9 log.go:172] (0xc00222ec60) Reply frame received for 3 I0205 21:10:30.114652 9 log.go:172] (0xc00222ec60) (0xc002267540) Create stream I0205 21:10:30.114661 9 log.go:172] (0xc00222ec60) (0xc002267540) Stream added, broadcasting: 5 I0205 21:10:30.116004 9 log.go:172] (0xc00222ec60) Reply frame received for 5 I0205 21:10:30.188330 9 log.go:172] (0xc00222ec60) Data frame received for 3 I0205 21:10:30.188589 9 log.go:172] (0xc002267400) (3) Data frame handling I0205 21:10:30.188629 9 log.go:172] (0xc002267400) (3) Data frame sent I0205 21:10:30.246862 9 log.go:172] (0xc00222ec60) (0xc002267400) Stream removed, broadcasting: 3 I0205 21:10:30.247067 9 log.go:172] (0xc00222ec60) Data frame received for 1 I0205 21:10:30.247078 9 log.go:172] (0xc0021f99a0) (1) Data frame handling I0205 21:10:30.247091 9 log.go:172] (0xc0021f99a0) (1) Data frame sent I0205 21:10:30.247100 9 log.go:172] (0xc00222ec60) (0xc0021f99a0) Stream removed, broadcasting: 1 I0205 21:10:30.247180 9 log.go:172] (0xc00222ec60) (0xc002267540) Stream removed, broadcasting: 5 I0205 21:10:30.247196 9 log.go:172] (0xc00222ec60) (0xc0021f99a0) Stream removed, broadcasting: 1 I0205 21:10:30.247203 9 log.go:172] (0xc00222ec60) (0xc002267400) Stream removed, broadcasting: 3 I0205 21:10:30.247213 9 log.go:172] (0xc00222ec60) (0xc002267540) Stream removed, broadcasting: 5 Feb 5 21:10:30.247: INFO: Exec stderr: "" Feb 5 21:10:30.247: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8393 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0205 21:10:30.247547 9 log.go:172] (0xc00222ec60) Go 
away received Feb 5 21:10:30.247: INFO: >>> kubeConfig: /root/.kube/config I0205 21:10:30.311257 9 log.go:172] (0xc0026d8000) (0xc001dfce60) Create stream I0205 21:10:30.311386 9 log.go:172] (0xc0026d8000) (0xc001dfce60) Stream added, broadcasting: 1 I0205 21:10:30.317302 9 log.go:172] (0xc0026d8000) Reply frame received for 1 I0205 21:10:30.317366 9 log.go:172] (0xc0026d8000) (0xc001ee06e0) Create stream I0205 21:10:30.317385 9 log.go:172] (0xc0026d8000) (0xc001ee06e0) Stream added, broadcasting: 3 I0205 21:10:30.318796 9 log.go:172] (0xc0026d8000) Reply frame received for 3 I0205 21:10:30.318831 9 log.go:172] (0xc0026d8000) (0xc001ee0780) Create stream I0205 21:10:30.318838 9 log.go:172] (0xc0026d8000) (0xc001ee0780) Stream added, broadcasting: 5 I0205 21:10:30.320500 9 log.go:172] (0xc0026d8000) Reply frame received for 5 I0205 21:10:30.382688 9 log.go:172] (0xc0026d8000) Data frame received for 3 I0205 21:10:30.382958 9 log.go:172] (0xc001ee06e0) (3) Data frame handling I0205 21:10:30.383032 9 log.go:172] (0xc001ee06e0) (3) Data frame sent I0205 21:10:30.449067 9 log.go:172] (0xc0026d8000) (0xc001ee06e0) Stream removed, broadcasting: 3 I0205 21:10:30.449476 9 log.go:172] (0xc0026d8000) Data frame received for 1 I0205 21:10:30.449526 9 log.go:172] (0xc001dfce60) (1) Data frame handling I0205 21:10:30.449596 9 log.go:172] (0xc001dfce60) (1) Data frame sent I0205 21:10:30.449633 9 log.go:172] (0xc0026d8000) (0xc001ee0780) Stream removed, broadcasting: 5 I0205 21:10:30.449674 9 log.go:172] (0xc0026d8000) (0xc001dfce60) Stream removed, broadcasting: 1 I0205 21:10:30.449700 9 log.go:172] (0xc0026d8000) Go away received I0205 21:10:30.450259 9 log.go:172] (0xc0026d8000) (0xc001dfce60) Stream removed, broadcasting: 1 I0205 21:10:30.450461 9 log.go:172] (0xc0026d8000) (0xc001ee06e0) Stream removed, broadcasting: 3 I0205 21:10:30.450476 9 log.go:172] (0xc0026d8000) (0xc001ee0780) Stream removed, broadcasting: 5 Feb 5 21:10:30.450: INFO: Exec stderr: "" [AfterEach] 
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:10:30.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8393" for this suite.
• [SLOW TEST:28.767 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:10:30.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 5 21:10:39.117: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-858 pod-service-account-96fc45ad-dbd4-4cb8-bc8c-31d13d7f5154 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 5 21:10:39.503: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-858 pod-service-account-96fc45ad-dbd4-4cb8-bc8c-31d13d7f5154 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 5 21:10:39.794: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-858 pod-service-account-96fc45ad-dbd4-4cb8-bc8c-31d13d7f5154 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:10:40.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-858" for this suite.
• [SLOW TEST:9.700 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":9,"skipped":300,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:10:40.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb 5 21:10:40.391: INFO: Waiting up to 5m0s for pod "downward-api-fbb83189-da3e-4e7d-98f7-c14f5bc61f84" in namespace "downward-api-7138" to be "success or failure"
Feb 5 21:10:40.407: INFO: Pod "downward-api-fbb83189-da3e-4e7d-98f7-c14f5bc61f84": Phase="Pending", Reason="", readiness=false. Elapsed: 15.456326ms
Feb 5 21:10:42.427: INFO: Pod "downward-api-fbb83189-da3e-4e7d-98f7-c14f5bc61f84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035051708s
Feb 5 21:10:44.934: INFO: Pod "downward-api-fbb83189-da3e-4e7d-98f7-c14f5bc61f84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.542720433s
Feb 5 21:10:46.945: INFO: Pod "downward-api-fbb83189-da3e-4e7d-98f7-c14f5bc61f84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.55357639s
Feb 5 21:10:48.954: INFO: Pod "downward-api-fbb83189-da3e-4e7d-98f7-c14f5bc61f84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.562307591s
STEP: Saw pod success
Feb 5 21:10:48.954: INFO: Pod "downward-api-fbb83189-da3e-4e7d-98f7-c14f5bc61f84" satisfied condition "success or failure"
Feb 5 21:10:48.960: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod downward-api-fbb83189-da3e-4e7d-98f7-c14f5bc61f84 container dapi-container:
STEP: delete the pod
Feb 5 21:10:49.193: INFO: Waiting for pod downward-api-fbb83189-da3e-4e7d-98f7-c14f5bc61f84 to disappear
Feb 5 21:10:49.335: INFO: Pod downward-api-fbb83189-da3e-4e7d-98f7-c14f5bc61f84 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:10:49.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7138" for this suite.
• [SLOW TEST:9.331 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":327,"failed":0}
SSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:10:49.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Feb 5 21:10:49.660: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1102" to be "success or failure"
Feb 5 21:10:49.666: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.939409ms
Feb 5 21:10:51.933: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273126046s
Feb 5 21:10:53.941: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.281177721s
Feb 5 21:10:56.628: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.967961239s
Feb 5 21:10:58.639: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.979162255s
Feb 5 21:11:00.647: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.986931455s
STEP: Saw pod success
Feb 5 21:11:00.647: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 5 21:11:00.650: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-host-path-test container test-container-1:
STEP: delete the pod
Feb 5 21:11:01.348: INFO: Waiting for pod pod-host-path-test to disappear
Feb 5 21:11:01.356: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:11:01.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1102" for this suite.
• [SLOW TEST:12.119 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":330,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:11:01.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 5 21:11:02.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 5 21:11:04.429: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, 
loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:11:07.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:11:08.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:11:10.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716533862, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 5 21:11:13.506: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration 
API
Feb 5 21:11:13.573: INFO: Waiting for webhook configuration to be ready...
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:11:13.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6655" for this suite.
STEP: Destroying namespace "webhook-6655-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.506 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":12,"skipped":340,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:11:14.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb 5 21:11:14.437: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1" in namespace "projected-5034" to be "success or failure"
Feb 5 21:11:14.478: INFO: Pod "downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.906215ms
Feb 5 21:11:16.697: INFO: Pod "downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.259880542s
Feb 5 21:11:18.704: INFO: Pod "downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267133639s
Feb 5 21:11:20.991: INFO: Pod "downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.553877992s
Feb 5 21:11:23.488: INFO: Pod "downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.050995431s
Feb 5 21:11:25.718: INFO: Pod "downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1": Phase="Pending", Reason="", readiness=false. Elapsed: 11.281017984s
Feb 5 21:11:27.725: INFO: Pod "downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.287892717s
STEP: Saw pod success
Feb 5 21:11:27.725: INFO: Pod "downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1" satisfied condition "success or failure"
Feb 5 21:11:27.731: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1 container client-container:
STEP: delete the pod
Feb 5 21:11:27.801: INFO: Waiting for pod downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1 to disappear
Feb 5 21:11:27.806: INFO: Pod downwardapi-volume-2023f82e-5520-4d67-8041-8944de40c7a1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 21:11:27.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5034" for this suite.
• [SLOW TEST:14.025 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":342,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 21:11:28.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace
api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:11:28.537: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Feb 5 21:11:31.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-470 create -f -' Feb 5 21:11:34.549: INFO: stderr: "" Feb 5 21:11:34.549: INFO: stdout: "e2e-test-crd-publish-openapi-3151-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 5 21:11:34.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-470 delete e2e-test-crd-publish-openapi-3151-crds test-foo' Feb 5 21:11:34.720: INFO: stderr: "" Feb 5 21:11:34.721: INFO: stdout: "e2e-test-crd-publish-openapi-3151-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Feb 5 21:11:34.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-470 apply -f -' Feb 5 21:11:35.054: INFO: stderr: "" Feb 5 21:11:35.054: INFO: stdout: "e2e-test-crd-publish-openapi-3151-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 5 21:11:35.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-470 delete e2e-test-crd-publish-openapi-3151-crds test-foo' Feb 5 21:11:35.177: INFO: stderr: "" Feb 5 21:11:35.177: INFO: stdout: "e2e-test-crd-publish-openapi-3151-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Feb 5 21:11:35.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-470 create -f -' Feb 5 21:11:35.458: INFO: rc: 1 Feb 5 21:11:35.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-470 apply -f -' Feb 5 21:11:35.717: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Feb 5 21:11:35.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-470 create -f -' Feb 5 21:11:36.014: INFO: rc: 1 Feb 5 21:11:36.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-470 apply -f -' Feb 5 21:11:36.303: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Feb 5 21:11:36.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3151-crds' Feb 5 21:11:36.610: INFO: stderr: "" Feb 5 21:11:36.610: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3151-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Feb 5 21:11:36.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3151-crds.metadata' Feb 5 21:11:36.891: INFO: stderr: "" Feb 5 21:11:36.891: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3151-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended on by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n pass them unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Feb 5 21:11:36.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3151-crds.spec' Feb 5 21:11:37.171: INFO: stderr: "" Feb 5 21:11:37.171: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3151-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Feb 5 21:11:37.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3151-crds.spec.bars' Feb 5 21:11:37.470: INFO: stderr: "" Feb 5 21:11:37.470: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3151-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl 
explain works to return error when explain is called on property that doesn't exist Feb 5 21:11:37.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3151-crds.spec.bars2' Feb 5 21:11:37.749: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:11:40.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-470" for this suite. • [SLOW TEST:12.749 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":14,"skipped":360,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:11:40.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:11:40.983: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-830a334a-cbe0-475a-b1a0-229f87258e2a" in namespace "security-context-test-7793" to be "success or failure" Feb 5 21:11:40.989: INFO: Pod "busybox-readonly-false-830a334a-cbe0-475a-b1a0-229f87258e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.903463ms Feb 5 21:11:42.997: INFO: Pod "busybox-readonly-false-830a334a-cbe0-475a-b1a0-229f87258e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013713158s Feb 5 21:11:45.003: INFO: Pod "busybox-readonly-false-830a334a-cbe0-475a-b1a0-229f87258e2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019375387s Feb 5 21:11:47.011: INFO: Pod "busybox-readonly-false-830a334a-cbe0-475a-b1a0-229f87258e2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027687303s Feb 5 21:11:47.011: INFO: Pod "busybox-readonly-false-830a334a-cbe0-475a-b1a0-229f87258e2a" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:11:47.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7793" for this suite. 
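[Editor's note] The pod this test submits is, in outline, the manifest below — a sketch reconstructed from the fields named in the log. The pod name is taken from the log; the image tag and the write-probe command are my assumptions, not the test's actual values. The field under test is `securityContext.readOnlyRootFilesystem`.

```python
# Sketch of the readOnlyRootFilesystem=false pod (image/command are assumed;
# the securityContext flag is what the conformance test actually exercises).
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-readonly-false-830a334a-cbe0-475a-b1a0-229f87258e2a"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "busybox",
            "image": "busybox",                         # assumed tag
            "command": ["sh", "-c", "touch /tmp/ok"],   # assumed write probe
            # false => the container may write to its root filesystem,
            # which is why the pod runs to completion ("Succeeded") above.
            "securityContext": {"readOnlyRootFilesystem": False},
        }],
    },
}
print(pod["spec"]["containers"][0]["securityContext"]["readOnlyRootFilesystem"])
```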
• [SLOW TEST:6.124 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":412,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:11:47.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7471 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 5 21:11:47.211: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 5 21:12:21.601: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 
8081 | grep -v '^\s*$'] Namespace:pod-network-test-7471 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:12:21.601: INFO: >>> kubeConfig: /root/.kube/config I0205 21:12:21.713687 9 log.go:172] (0xc002182000) (0xc00265a0a0) Create stream I0205 21:12:21.713844 9 log.go:172] (0xc002182000) (0xc00265a0a0) Stream added, broadcasting: 1 I0205 21:12:21.717551 9 log.go:172] (0xc002182000) Reply frame received for 1 I0205 21:12:21.717603 9 log.go:172] (0xc002182000) (0xc0027bc000) Create stream I0205 21:12:21.717612 9 log.go:172] (0xc002182000) (0xc0027bc000) Stream added, broadcasting: 3 I0205 21:12:21.719127 9 log.go:172] (0xc002182000) Reply frame received for 3 I0205 21:12:21.719234 9 log.go:172] (0xc002182000) (0xc0027bc0a0) Create stream I0205 21:12:21.719247 9 log.go:172] (0xc002182000) (0xc0027bc0a0) Stream added, broadcasting: 5 I0205 21:12:21.720956 9 log.go:172] (0xc002182000) Reply frame received for 5 I0205 21:12:22.829972 9 log.go:172] (0xc002182000) Data frame received for 3 I0205 21:12:22.830117 9 log.go:172] (0xc0027bc000) (3) Data frame handling I0205 21:12:22.830144 9 log.go:172] (0xc0027bc000) (3) Data frame sent I0205 21:12:22.963685 9 log.go:172] (0xc002182000) (0xc0027bc000) Stream removed, broadcasting: 3 I0205 21:12:22.963862 9 log.go:172] (0xc002182000) (0xc0027bc0a0) Stream removed, broadcasting: 5 I0205 21:12:22.963910 9 log.go:172] (0xc002182000) Data frame received for 1 I0205 21:12:22.963925 9 log.go:172] (0xc00265a0a0) (1) Data frame handling I0205 21:12:22.963948 9 log.go:172] (0xc00265a0a0) (1) Data frame sent I0205 21:12:22.963962 9 log.go:172] (0xc002182000) (0xc00265a0a0) Stream removed, broadcasting: 1 I0205 21:12:22.963976 9 log.go:172] (0xc002182000) Go away received I0205 21:12:22.964368 9 log.go:172] (0xc002182000) (0xc00265a0a0) Stream removed, broadcasting: 1 I0205 21:12:22.964553 9 log.go:172] (0xc002182000) (0xc0027bc000) Stream removed, 
broadcasting: 3 I0205 21:12:22.964576 9 log.go:172] (0xc002182000) (0xc0027bc0a0) Stream removed, broadcasting: 5 Feb 5 21:12:22.964: INFO: Found all expected endpoints: [netserver-0] Feb 5 21:12:22.975: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7471 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:12:22.975: INFO: >>> kubeConfig: /root/.kube/config I0205 21:12:23.029637 9 log.go:172] (0xc000e1ad10) (0xc0026da000) Create stream I0205 21:12:23.029715 9 log.go:172] (0xc000e1ad10) (0xc0026da000) Stream added, broadcasting: 1 I0205 21:12:23.034179 9 log.go:172] (0xc000e1ad10) Reply frame received for 1 I0205 21:12:23.034242 9 log.go:172] (0xc000e1ad10) (0xc0026da0a0) Create stream I0205 21:12:23.034248 9 log.go:172] (0xc000e1ad10) (0xc0026da0a0) Stream added, broadcasting: 3 I0205 21:12:23.035716 9 log.go:172] (0xc000e1ad10) Reply frame received for 3 I0205 21:12:23.035732 9 log.go:172] (0xc000e1ad10) (0xc0027483c0) Create stream I0205 21:12:23.035737 9 log.go:172] (0xc000e1ad10) (0xc0027483c0) Stream added, broadcasting: 5 I0205 21:12:23.037501 9 log.go:172] (0xc000e1ad10) Reply frame received for 5 I0205 21:12:24.148438 9 log.go:172] (0xc000e1ad10) Data frame received for 3 I0205 21:12:24.148560 9 log.go:172] (0xc0026da0a0) (3) Data frame handling I0205 21:12:24.148584 9 log.go:172] (0xc0026da0a0) (3) Data frame sent I0205 21:12:24.217108 9 log.go:172] (0xc000e1ad10) Data frame received for 1 I0205 21:12:24.217124 9 log.go:172] (0xc0026da000) (1) Data frame handling I0205 21:12:24.217130 9 log.go:172] (0xc0026da000) (1) Data frame sent I0205 21:12:24.223537 9 log.go:172] (0xc000e1ad10) (0xc0026da0a0) Stream removed, broadcasting: 3 I0205 21:12:24.223577 9 log.go:172] (0xc000e1ad10) (0xc0026da000) Stream removed, broadcasting: 1 I0205 21:12:24.223703 9 log.go:172] (0xc000e1ad10) (0xc0027483c0) 
Stream removed, broadcasting: 5 I0205 21:12:24.223776 9 log.go:172] (0xc000e1ad10) (0xc0026da000) Stream removed, broadcasting: 1 I0205 21:12:24.223791 9 log.go:172] (0xc000e1ad10) (0xc0026da0a0) Stream removed, broadcasting: 3 I0205 21:12:24.223798 9 log.go:172] (0xc000e1ad10) (0xc0027483c0) Stream removed, broadcasting: 5 I0205 21:12:24.223851 9 log.go:172] (0xc000e1ad10) Go away received Feb 5 21:12:24.224: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:12:24.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7471" for this suite. • [SLOW TEST:37.209 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:12:24.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for 
a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1713 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 5 21:12:24.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-1762' Feb 5 21:12:24.745: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 5 21:12:24.745: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1718 Feb 5 21:12:26.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1762' Feb 5 21:12:27.411: INFO: stderr: "" Feb 5 21:12:27.411: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:12:27.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1762" for this suite. 
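[Editor's note] As the deprecation warning in the log says, `--generator=deployment/apps.v1` is replaced by `kubectl create deployment`. Either way, the object created is an apps/v1 Deployment roughly shaped like this sketch; the replica count and the `run` label convention are assumptions about kubectl's expansion, not taken from the log:

```python
# Rough shape of the Deployment that `kubectl run --image=... \
# --generator=deployment/apps.v1` produced above (labels/replicas assumed).
name = "e2e-test-httpd-deployment"
image = "docker.io/library/httpd:2.4.38-alpine"
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": name, "labels": {"run": name}},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"run": name}},
        "template": {
            "metadata": {"labels": {"run": name}},
            "spec": {"containers": [{"name": name, "image": image}]},
        },
    },
}
# apps/v1 requires the selector to match the pod template labels:
assert deployment["spec"]["selector"]["matchLabels"] == \
       deployment["spec"]["template"]["metadata"]["labels"]
```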
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":17,"skipped":436,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:12:27.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:12:27.787: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:12:28.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2114" for this suite. 
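[Editor's note] A CustomResourceDefinition like the ones this test creates and deletes has, at minimum, the shape below. The group and kind here are illustrative stand-ins for the randomized names the test generates; the `metadata.name = <plural>.<group>` rule the sketch checks is required by the apiextensions API:

```python
# Minimal apiextensions.k8s.io/v1 CRD sketch (group/kind are illustrative).
plural, group = "foos", "example.com"
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    # metadata.name must be "<plural>.<group>":
    "metadata": {"name": f"{plural}.{group}"},
    "spec": {
        "group": group,
        "scope": "Namespaced",
        "names": {"plural": plural, "singular": "foo", "kind": "Foo"},
        "versions": [{
            "name": "v1", "served": True, "storage": True,
            # v1 CRDs require a structural schema:
            "schema": {"openAPIV3Schema": {"type": "object"}},
        }],
    },
}
assert crd["metadata"]["name"] == crd["spec"]["names"]["plural"] + "." + crd["spec"]["group"]
```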
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":18,"skipped":440,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:12:28.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4260.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4260.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 5 21:12:47.180: INFO: DNS probes using dns-4260/dns-test-99f0fd25-8708-410c-963b-65d444c9177a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:12:47.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4260" for this suite. 
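[Editor's note] The probe containers above derive each pod's DNS A-record name from its IP with an awk substitution (`-F.` splitting, then `$1"-"$2"-"$3"-"$4` joined with dashes, plus the namespaced pod suffix). The same transform in Python — the function name is mine, the suffix format is the one the script uses:

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror of the probe's awk transform: dots in an IPv4 pod IP become
    dashes, then the namespace-scoped pod DNS suffix is appended."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.44.0.1", "dns-4260"))
```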
• [SLOW TEST:18.484 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":19,"skipped":448,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:12:47.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 5 21:12:47.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832" in namespace "projected-6025" to be "success or failure" Feb 5 21:12:47.555: INFO: Pod "downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.399798ms Feb 5 21:12:49.561: INFO: Pod "downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063485913s Feb 5 21:12:51.567: INFO: Pod "downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069370378s Feb 5 21:12:53.576: INFO: Pod "downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079055435s Feb 5 21:12:55.642: INFO: Pod "downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832": Phase="Pending", Reason="", readiness=false. Elapsed: 8.144922412s Feb 5 21:12:57.649: INFO: Pod "downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.152085088s STEP: Saw pod success Feb 5 21:12:57.650: INFO: Pod "downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832" satisfied condition "success or failure" Feb 5 21:12:57.655: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832 container client-container: STEP: delete the pod Feb 5 21:12:57.725: INFO: Waiting for pod downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832 to disappear Feb 5 21:12:57.761: INFO: Pod downwardapi-volume-b28db5d7-0959-482f-9878-c0e2b8c71832 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:12:57.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6025" for this suite. 
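[Editor's note] The behavior this test verifies: when a container sets no memory limit, the downward API reports the node's allocatable memory as the default for `limits.memory`. A minimal sketch of that fallback (the byte values are assumed for illustration):

```python
def effective_memory_limit(container_limit, node_allocatable):
    """Downward-API value for limits.memory: the container's own limit when
    set, otherwise the node's allocatable memory (the tested default)."""
    return container_limit if container_limit is not None else node_allocatable

# Pod with no memory limit on a node with 4Gi allocatable (assumed):
print(effective_memory_limit(None, 4 * 1024**3))
```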
• [SLOW TEST:10.408 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:12:57.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:13:12.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4420" for this suite. • [SLOW TEST:14.348 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":21,"skipped":498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:13:12.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-906.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-906.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-906.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-906.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-906.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-906.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 5 21:13:22.351: INFO: DNS probes using dns-906/dns-test-051386ad-71b1-4827-990d-113c7491adba succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:13:22.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-906" for this suite. • [SLOW TEST:10.332 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":22,"skipped":525,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:13:22.459: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:13:22.542: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:13:28.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7248" for this suite. • [SLOW TEST:5.923 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":23,"skipped":541,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:13:28.383: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 5 21:13:28.655: INFO: Waiting up to 5m0s for pod "pod-1b245e38-f323-4b2b-981c-ecd2c2cf1668" in namespace "emptydir-9889" to be "success or failure" Feb 5 21:13:28.668: INFO: Pod "pod-1b245e38-f323-4b2b-981c-ecd2c2cf1668": Phase="Pending", Reason="", readiness=false. Elapsed: 12.363838ms Feb 5 21:13:30.677: INFO: Pod "pod-1b245e38-f323-4b2b-981c-ecd2c2cf1668": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02207065s Feb 5 21:13:32.682: INFO: Pod "pod-1b245e38-f323-4b2b-981c-ecd2c2cf1668": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026969334s Feb 5 21:13:34.693: INFO: Pod "pod-1b245e38-f323-4b2b-981c-ecd2c2cf1668": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037639278s Feb 5 21:13:36.700: INFO: Pod "pod-1b245e38-f323-4b2b-981c-ecd2c2cf1668": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.0448694s STEP: Saw pod success Feb 5 21:13:36.700: INFO: Pod "pod-1b245e38-f323-4b2b-981c-ecd2c2cf1668" satisfied condition "success or failure" Feb 5 21:13:36.704: INFO: Trying to get logs from node jerma-node pod pod-1b245e38-f323-4b2b-981c-ecd2c2cf1668 container test-container: STEP: delete the pod Feb 5 21:13:37.370: INFO: Waiting for pod pod-1b245e38-f323-4b2b-981c-ecd2c2cf1668 to disappear Feb 5 21:13:37.376: INFO: Pod pod-1b245e38-f323-4b2b-981c-ecd2c2cf1668 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:13:37.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9889" for this suite. • [SLOW TEST:9.054 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":563,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:13:37.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:13:48.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6462" for this suite. • [SLOW TEST:11.454 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":25,"skipped":570,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:13:48.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2119.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2119.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2119.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2119.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2119.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2119.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 5 21:13:59.203: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:13:59.222: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:13:59.228: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:13:59.234: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:13:59.252: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:13:59.256: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod 
dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:13:59.260: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:13:59.264: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:13:59.274: INFO: Lookups using dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local] Feb 5 21:14:04.280: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:04.285: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:04.293: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod 
dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:04.308: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:04.333: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:04.336: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:04.339: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:04.342: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:04.349: INFO: Lookups using dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local] Feb 5 21:14:09.284: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:09.290: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:09.297: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:09.304: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:09.331: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:09.337: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:09.342: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod 
dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:09.347: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:09.356: INFO: Lookups using dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local] Feb 5 21:14:14.323: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:14.335: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:14.342: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:14.348: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod 
dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:14.369: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:14.372: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:14.375: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:14.378: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:14.387: INFO: Lookups using dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local] Feb 5 21:14:19.285: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local 
from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:19.290: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:19.297: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:19.302: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:19.319: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:19.324: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:19.327: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:19.330: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the 
server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:19.336: INFO: Lookups using dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local] Feb 5 21:14:24.284: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:24.289: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:24.295: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:24.301: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:24.339: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod 
dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:24.347: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:24.355: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:24.364: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local from pod dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476: the server could not find the requested resource (get pods dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476) Feb 5 21:14:24.377: INFO: Lookups using dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2119.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2119.svc.cluster.local jessie_udp@dns-test-service-2.dns-2119.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2119.svc.cluster.local] Feb 5 21:14:29.375: INFO: DNS probes using dns-2119/dns-test-0e8e7c62-4ee0-453d-9815-6185f6253476 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:14:29.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-2119" for this suite. • [SLOW TEST:40.895 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":26,"skipped":578,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:14:29.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 5 21:14:38.162: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:14:38.279: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8518" for this suite. • [SLOW TEST:8.505 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":594,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:14:38.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 5 21:14:38.439: INFO: Waiting up to 5m0s for 
pod "pod-e79efa70-70cb-4e54-a0ec-f942dab38882" in namespace "emptydir-5552" to be "success or failure" Feb 5 21:14:38.455: INFO: Pod "pod-e79efa70-70cb-4e54-a0ec-f942dab38882": Phase="Pending", Reason="", readiness=false. Elapsed: 16.129072ms Feb 5 21:14:40.465: INFO: Pod "pod-e79efa70-70cb-4e54-a0ec-f942dab38882": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026173797s Feb 5 21:14:42.476: INFO: Pod "pod-e79efa70-70cb-4e54-a0ec-f942dab38882": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036872162s Feb 5 21:14:44.639: INFO: Pod "pod-e79efa70-70cb-4e54-a0ec-f942dab38882": Phase="Pending", Reason="", readiness=false. Elapsed: 6.200514148s Feb 5 21:14:46.667: INFO: Pod "pod-e79efa70-70cb-4e54-a0ec-f942dab38882": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22809021s Feb 5 21:14:48.856: INFO: Pod "pod-e79efa70-70cb-4e54-a0ec-f942dab38882": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.417477216s STEP: Saw pod success Feb 5 21:14:48.857: INFO: Pod "pod-e79efa70-70cb-4e54-a0ec-f942dab38882" satisfied condition "success or failure" Feb 5 21:14:48.868: INFO: Trying to get logs from node jerma-node pod pod-e79efa70-70cb-4e54-a0ec-f942dab38882 container test-container: STEP: delete the pod Feb 5 21:14:48.939: INFO: Waiting for pod pod-e79efa70-70cb-4e54-a0ec-f942dab38882 to disappear Feb 5 21:14:48.952: INFO: Pod pod-e79efa70-70cb-4e54-a0ec-f942dab38882 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:14:48.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5552" for this suite. 
• [SLOW TEST:11.006 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":599,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:14:49.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 5 21:14:50.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Feb 5 21:14:52.314: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:14:54.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:14:56.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534090, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 5 21:14:59.364: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:14:59.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9398-crds.webhook.example.com via the 
AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:15:00.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1603" for this suite. STEP: Destroying namespace "webhook-1603-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.942 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":29,"skipped":603,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:15:01.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Update Demo 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Feb 5 21:15:01.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7642' Feb 5 21:15:03.098: INFO: stderr: "" Feb 5 21:15:03.098: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 5 21:15:03.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7642' Feb 5 21:15:03.332: INFO: stderr: "" Feb 5 21:15:03.333: INFO: stdout: "update-demo-nautilus-bksjw update-demo-nautilus-sb26g " Feb 5 21:15:03.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bksjw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7642' Feb 5 21:15:03.452: INFO: stderr: "" Feb 5 21:15:03.453: INFO: stdout: "" Feb 5 21:15:03.453: INFO: update-demo-nautilus-bksjw is created but not running Feb 5 21:15:08.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7642' Feb 5 21:15:08.850: INFO: stderr: "" Feb 5 21:15:08.850: INFO: stdout: "update-demo-nautilus-bksjw update-demo-nautilus-sb26g " Feb 5 21:15:08.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bksjw -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7642' Feb 5 21:15:09.449: INFO: stderr: "" Feb 5 21:15:09.449: INFO: stdout: "" Feb 5 21:15:09.450: INFO: update-demo-nautilus-bksjw is created but not running Feb 5 21:15:14.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7642' Feb 5 21:15:14.668: INFO: stderr: "" Feb 5 21:15:14.669: INFO: stdout: "update-demo-nautilus-bksjw update-demo-nautilus-sb26g " Feb 5 21:15:14.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bksjw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7642' Feb 5 21:15:14.768: INFO: stderr: "" Feb 5 21:15:14.768: INFO: stdout: "true" Feb 5 21:15:14.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bksjw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7642' Feb 5 21:15:14.845: INFO: stderr: "" Feb 5 21:15:14.845: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 5 21:15:14.845: INFO: validating pod update-demo-nautilus-bksjw Feb 5 21:15:14.867: INFO: got data: { "image": "nautilus.jpg" } Feb 5 21:15:14.867: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 5 21:15:14.867: INFO: update-demo-nautilus-bksjw is verified up and running Feb 5 21:15:14.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sb26g -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7642' Feb 5 21:15:14.960: INFO: stderr: "" Feb 5 21:15:14.960: INFO: stdout: "true" Feb 5 21:15:14.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sb26g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7642' Feb 5 21:15:15.044: INFO: stderr: "" Feb 5 21:15:15.044: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 5 21:15:15.044: INFO: validating pod update-demo-nautilus-sb26g Feb 5 21:15:15.068: INFO: got data: { "image": "nautilus.jpg" } Feb 5 21:15:15.068: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 5 21:15:15.068: INFO: update-demo-nautilus-sb26g is verified up and running STEP: using delete to clean up resources Feb 5 21:15:15.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7642' Feb 5 21:15:15.186: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 5 21:15:15.186: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 5 21:15:15.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7642' Feb 5 21:15:15.294: INFO: stderr: "No resources found in kubectl-7642 namespace.\n" Feb 5 21:15:15.294: INFO: stdout: "" Feb 5 21:15:15.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7642 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 5 21:15:15.396: INFO: stderr: "" Feb 5 21:15:15.396: INFO: stdout: "update-demo-nautilus-bksjw\nupdate-demo-nautilus-sb26g\n" Feb 5 21:15:15.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7642' Feb 5 21:15:16.055: INFO: stderr: "No resources found in kubectl-7642 namespace.\n" Feb 5 21:15:16.055: INFO: stdout: "" Feb 5 21:15:16.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7642 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 5 21:15:16.180: INFO: stderr: "" Feb 5 21:15:16.180: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:15:16.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7642" for this suite. 
• [SLOW TEST:14.951 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":30,"skipped":611,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:15:16.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Feb 5 21:15:28.187: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:15:28.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replicaset-5359" for this suite. • [SLOW TEST:12.159 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":31,"skipped":623,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:15:28.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-8173/secret-test-695df191-f43c-4361-8bdf-9ba8425a68a7 STEP: Creating a pod to test consume secrets Feb 5 21:15:28.600: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02" in namespace "secrets-8173" to be "success or failure" Feb 5 21:15:28.618: INFO: Pod "pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02": Phase="Pending", Reason="", readiness=false. Elapsed: 18.238425ms Feb 5 21:15:30.628: INFO: Pod "pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028236218s Feb 5 21:15:32.634: INFO: Pod "pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034075918s Feb 5 21:15:34.644: INFO: Pod "pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043556236s Feb 5 21:15:36.651: INFO: Pod "pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051311566s Feb 5 21:15:38.664: INFO: Pod "pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02": Phase="Pending", Reason="", readiness=false. Elapsed: 10.063464834s Feb 5 21:15:40.672: INFO: Pod "pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.071475759s STEP: Saw pod success Feb 5 21:15:40.672: INFO: Pod "pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02" satisfied condition "success or failure" Feb 5 21:15:40.674: INFO: Trying to get logs from node jerma-node pod pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02 container env-test: STEP: delete the pod Feb 5 21:15:40.712: INFO: Waiting for pod pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02 to disappear Feb 5 21:15:40.729: INFO: Pod pod-configmaps-cb5ca235-d2dd-4b04-bcd4-3ec0c08d5b02 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:15:40.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8173" for this suite. 
• [SLOW TEST:12.405 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":628,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:15:40.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-e214ff5f-ae7e-4625-afc7-4325f1732857 in namespace container-probe-607 Feb 5 21:15:49.171: INFO: Started pod busybox-e214ff5f-ae7e-4625-afc7-4325f1732857 in namespace container-probe-607 STEP: checking the pod's current state and verifying that restartCount is present Feb 5 21:15:49.173: INFO: Initial restart count of pod busybox-e214ff5f-ae7e-4625-afc7-4325f1732857 is 0 Feb 5 21:16:37.479: INFO: Restart count of pod 
container-probe-607/busybox-e214ff5f-ae7e-4625-afc7-4325f1732857 is now 1 (48.305063899s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:16:37.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-607" for this suite. • [SLOW TEST:56.838 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":641,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:16:37.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2801.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-2801.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2801.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2801.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2801.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2801.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2801.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2801.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2801.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 162.204.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.204.162_udp@PTR;check="$$(dig +tcp +noall +answer +search 162.204.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.204.162_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2801.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2801.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2801.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2801.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2801.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2801.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2801.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2801.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2801.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2801.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 162.204.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.204.162_udp@PTR;check="$$(dig +tcp +noall +answer +search 162.204.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.204.162_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 5 21:16:50.085: INFO: Unable to read wheezy_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:50.091: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:50.095: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:50.098: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:50.129: INFO: Unable to read jessie_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:50.133: INFO: Unable to read jessie_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:50.136: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod 
dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:50.140: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:50.163: INFO: Lookups using dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76 failed for: [wheezy_udp@dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_udp@dns-test-service.dns-2801.svc.cluster.local jessie_tcp@dns-test-service.dns-2801.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local] Feb 5 21:16:55.170: INFO: Unable to read wheezy_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:55.173: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:55.177: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:55.181: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod 
dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:55.208: INFO: Unable to read jessie_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:55.211: INFO: Unable to read jessie_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:55.215: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:55.219: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:16:55.242: INFO: Lookups using dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76 failed for: [wheezy_udp@dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_udp@dns-test-service.dns-2801.svc.cluster.local jessie_tcp@dns-test-service.dns-2801.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local] Feb 5 21:17:00.171: INFO: Unable to read wheezy_udp@dns-test-service.dns-2801.svc.cluster.local from pod 
dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:00.177: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:00.183: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:00.190: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:00.225: INFO: Unable to read jessie_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:00.230: INFO: Unable to read jessie_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:00.235: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:00.240: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the 
requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:00.268: INFO: Lookups using dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76 failed for: [wheezy_udp@dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_udp@dns-test-service.dns-2801.svc.cluster.local jessie_tcp@dns-test-service.dns-2801.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local] Feb 5 21:17:05.171: INFO: Unable to read wheezy_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:05.177: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:05.180: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:05.184: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:05.213: INFO: Unable to read jessie_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods 
dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:05.216: INFO: Unable to read jessie_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:05.219: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:05.222: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:05.242: INFO: Lookups using dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76 failed for: [wheezy_udp@dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_udp@dns-test-service.dns-2801.svc.cluster.local jessie_tcp@dns-test-service.dns-2801.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local] Feb 5 21:17:10.174: INFO: Unable to read wheezy_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:10.179: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) 
Feb 5 21:17:10.183: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:10.187: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:10.238: INFO: Unable to read jessie_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:10.243: INFO: Unable to read jessie_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:10.247: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:10.251: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:10.362: INFO: Lookups using dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76 failed for: [wheezy_udp@dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local 
jessie_udp@dns-test-service.dns-2801.svc.cluster.local jessie_tcp@dns-test-service.dns-2801.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local] Feb 5 21:17:15.171: INFO: Unable to read wheezy_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:15.185: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:15.192: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:15.196: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:15.228: INFO: Unable to read jessie_udp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:15.232: INFO: Unable to read jessie_tcp@dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:15.236: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod 
dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:15.240: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local from pod dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76: the server could not find the requested resource (get pods dns-test-166f6abf-6ac4-446f-b633-09245722bc76) Feb 5 21:17:15.263: INFO: Lookups using dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76 failed for: [wheezy_udp@dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@dns-test-service.dns-2801.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_udp@dns-test-service.dns-2801.svc.cluster.local jessie_tcp@dns-test-service.dns-2801.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2801.svc.cluster.local] Feb 5 21:17:20.265: INFO: DNS probes using dns-2801/dns-test-166f6abf-6ac4-446f-b633-09245722bc76 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:17:20.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2801" for this suite. 
• [SLOW TEST:42.958 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":34,"skipped":663,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:17:20.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 5 21:17:36.744: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 5 21:17:36.755: INFO: Pod pod-with-prestop-http-hook still exists Feb 5 21:17:38.756: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 5 21:17:38.763: INFO: Pod pod-with-prestop-http-hook still exists Feb 5 21:17:40.755: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 5 21:17:40.761: INFO: Pod pod-with-prestop-http-hook still exists Feb 5 21:17:42.756: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 5 21:17:42.763: INFO: Pod pod-with-prestop-http-hook still exists Feb 5 21:17:44.756: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 5 21:17:44.763: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:17:44.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1103" for this suite. 
• [SLOW TEST:24.264 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":686,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:17:44.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Feb 5 21:17:44.942: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:18:02.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8012" for this suite. • [SLOW TEST:17.242 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":36,"skipped":715,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:18:02.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Feb 5 21:18:02.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3691 run e2e-test-rm-busybox-job 
--image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 5 21:18:08.184: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0205 21:18:07.255935 741 log.go:172] (0xc0000f73f0) (0xc00095e1e0) Create stream\nI0205 21:18:07.256219 741 log.go:172] (0xc0000f73f0) (0xc00095e1e0) Stream added, broadcasting: 1\nI0205 21:18:07.261728 741 log.go:172] (0xc0000f73f0) Reply frame received for 1\nI0205 21:18:07.261787 741 log.go:172] (0xc0000f73f0) (0xc0006a39a0) Create stream\nI0205 21:18:07.261806 741 log.go:172] (0xc0000f73f0) (0xc0006a39a0) Stream added, broadcasting: 3\nI0205 21:18:07.263611 741 log.go:172] (0xc0000f73f0) Reply frame received for 3\nI0205 21:18:07.263686 741 log.go:172] (0xc0000f73f0) (0xc000688000) Create stream\nI0205 21:18:07.263712 741 log.go:172] (0xc0000f73f0) (0xc000688000) Stream added, broadcasting: 5\nI0205 21:18:07.266595 741 log.go:172] (0xc0000f73f0) Reply frame received for 5\nI0205 21:18:07.266637 741 log.go:172] (0xc0000f73f0) (0xc0006880a0) Create stream\nI0205 21:18:07.266652 741 log.go:172] (0xc0000f73f0) (0xc0006880a0) Stream added, broadcasting: 7\nI0205 21:18:07.269769 741 log.go:172] (0xc0000f73f0) Reply frame received for 7\nI0205 21:18:07.269994 741 log.go:172] (0xc0006a39a0) (3) Writing data frame\nI0205 21:18:07.270399 741 log.go:172] (0xc0006a39a0) (3) Writing data frame\nI0205 21:18:07.275830 741 log.go:172] (0xc0000f73f0) Data frame received for 5\nI0205 21:18:07.275890 741 log.go:172] (0xc000688000) (5) Data frame handling\nI0205 21:18:07.275911 741 log.go:172] (0xc000688000) (5) Data frame sent\nI0205 21:18:07.280424 741 log.go:172] (0xc0000f73f0) Data frame received for 5\nI0205 21:18:07.280443 741 log.go:172] (0xc000688000) (5) Data frame handling\nI0205 
21:18:07.280461 741 log.go:172] (0xc000688000) (5) Data frame sent\nI0205 21:18:08.147910 741 log.go:172] (0xc0000f73f0) Data frame received for 1\nI0205 21:18:08.148033 741 log.go:172] (0xc0000f73f0) (0xc0006880a0) Stream removed, broadcasting: 7\nI0205 21:18:08.148082 741 log.go:172] (0xc00095e1e0) (1) Data frame handling\nI0205 21:18:08.148103 741 log.go:172] (0xc00095e1e0) (1) Data frame sent\nI0205 21:18:08.148151 741 log.go:172] (0xc0000f73f0) (0xc000688000) Stream removed, broadcasting: 5\nI0205 21:18:08.148202 741 log.go:172] (0xc0000f73f0) (0xc00095e1e0) Stream removed, broadcasting: 1\nI0205 21:18:08.148852 741 log.go:172] (0xc0000f73f0) (0xc00095e1e0) Stream removed, broadcasting: 1\nI0205 21:18:08.148977 741 log.go:172] (0xc0000f73f0) (0xc0006a39a0) Stream removed, broadcasting: 3\nI0205 21:18:08.149000 741 log.go:172] (0xc0000f73f0) (0xc000688000) Stream removed, broadcasting: 5\nI0205 21:18:08.149010 741 log.go:172] (0xc0000f73f0) (0xc0006880a0) Stream removed, broadcasting: 7\nI0205 21:18:08.149100 741 log.go:172] (0xc0000f73f0) (0xc0006a39a0) Stream removed, broadcasting: 3\nI0205 21:18:08.149136 741 log.go:172] (0xc0000f73f0) Go away received\n" Feb 5 21:18:08.185: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:18:10.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3691" for this suite. 
• [SLOW TEST:8.140 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1924 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":37,"skipped":732,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:18:10.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:18:28.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9954" for this suite. STEP: Destroying namespace "nsdeletetest-9851" for this suite. Feb 5 21:18:28.751: INFO: Namespace nsdeletetest-9851 was already deleted STEP: Destroying namespace "nsdeletetest-750" for this suite. • [SLOW TEST:18.554 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":38,"skipped":750,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:18:28.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable 
(memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 5 21:18:28.884: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78e84c33-7dd1-4355-996a-fd0db1373860" in namespace "downward-api-5311" to be "success or failure" Feb 5 21:18:28.893: INFO: Pod "downwardapi-volume-78e84c33-7dd1-4355-996a-fd0db1373860": Phase="Pending", Reason="", readiness=false. Elapsed: 8.815423ms Feb 5 21:18:30.901: INFO: Pod "downwardapi-volume-78e84c33-7dd1-4355-996a-fd0db1373860": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016668404s Feb 5 21:18:32.908: INFO: Pod "downwardapi-volume-78e84c33-7dd1-4355-996a-fd0db1373860": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0236762s Feb 5 21:18:34.917: INFO: Pod "downwardapi-volume-78e84c33-7dd1-4355-996a-fd0db1373860": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033189529s Feb 5 21:18:36.924: INFO: Pod "downwardapi-volume-78e84c33-7dd1-4355-996a-fd0db1373860": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.039406403s STEP: Saw pod success Feb 5 21:18:36.924: INFO: Pod "downwardapi-volume-78e84c33-7dd1-4355-996a-fd0db1373860" satisfied condition "success or failure" Feb 5 21:18:36.928: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-78e84c33-7dd1-4355-996a-fd0db1373860 container client-container: STEP: delete the pod Feb 5 21:18:36.983: INFO: Waiting for pod downwardapi-volume-78e84c33-7dd1-4355-996a-fd0db1373860 to disappear Feb 5 21:18:37.100: INFO: Pod downwardapi-volume-78e84c33-7dd1-4355-996a-fd0db1373860 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:18:37.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5311" for this suite. • [SLOW TEST:8.350 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":753,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:18:37.111: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Feb 5 21:18:37.165: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 5 21:18:37.279: INFO: Waiting for terminating namespaces to be deleted... Feb 5 21:18:37.283: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 5 21:18:37.294: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 5 21:18:37.294: INFO: Container kube-proxy ready: true, restart count 0 Feb 5 21:18:37.294: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 5 21:18:37.294: INFO: Container weave ready: true, restart count 1 Feb 5 21:18:37.294: INFO: Container weave-npc ready: true, restart count 0 Feb 5 21:18:37.294: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 5 21:18:37.315: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 5 21:18:37.315: INFO: Container etcd ready: true, restart count 1 Feb 5 21:18:37.315: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 5 21:18:37.315: INFO: Container kube-apiserver ready: true, restart count 1 Feb 5 21:18:37.315: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 5 21:18:37.315: INFO: Container coredns ready: true, restart count 0 Feb 5 21:18:37.315: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 5 21:18:37.315: 
INFO: Container coredns ready: true, restart count 0 Feb 5 21:18:37.315: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 5 21:18:37.315: INFO: Container kube-proxy ready: true, restart count 0 Feb 5 21:18:37.315: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 5 21:18:37.315: INFO: Container weave ready: true, restart count 0 Feb 5 21:18:37.315: INFO: Container weave-npc ready: true, restart count 0 Feb 5 21:18:37.315: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 5 21:18:37.315: INFO: Container kube-controller-manager ready: true, restart count 3 Feb 5 21:18:37.315: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 5 21:18:37.315: INFO: Container kube-scheduler ready: true, restart count 5 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-node STEP: verifying the node has the label node jerma-server-mvvl6gufaqub Feb 5 21:18:37.466: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 5 21:18:37.466: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 5 21:18:37.466: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Feb 5 21:18:37.466: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub Feb 5 21:18:37.466: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub Feb 5 21:18:37.466: INFO: Pod 
kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub Feb 5 21:18:37.466: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node Feb 5 21:18:37.466: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub Feb 5 21:18:37.466: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node Feb 5 21:18:37.466: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub STEP: Starting Pods to consume most of the cluster CPU. Feb 5 21:18:37.466: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node Feb 5 21:18:37.479: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-09b8bc22-ab41-4a12-8fb7-b728863f1daa.15f09e4bcb292571], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9306/filler-pod-09b8bc22-ab41-4a12-8fb7-b728863f1daa to jerma-server-mvvl6gufaqub] STEP: Considering event: Type = [Normal], Name = [filler-pod-09b8bc22-ab41-4a12-8fb7-b728863f1daa.15f09e4ce7484d07], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-09b8bc22-ab41-4a12-8fb7-b728863f1daa.15f09e4e0a608cf9], Reason = [Created], Message = [Created container filler-pod-09b8bc22-ab41-4a12-8fb7-b728863f1daa] STEP: Considering event: Type = [Normal], Name = [filler-pod-09b8bc22-ab41-4a12-8fb7-b728863f1daa.15f09e4e31299c08], Reason = [Started], Message = [Started container filler-pod-09b8bc22-ab41-4a12-8fb7-b728863f1daa] STEP: Considering event: Type = [Normal], Name = [filler-pod-4c0aa5fc-c16e-46dc-aeb3-f0608034df2f.15f09e4bc655b5d3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9306/filler-pod-4c0aa5fc-c16e-46dc-aeb3-f0608034df2f to jerma-node] STEP: Considering 
event: Type = [Normal], Name = [filler-pod-4c0aa5fc-c16e-46dc-aeb3-f0608034df2f.15f09e4ca88668a2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-4c0aa5fc-c16e-46dc-aeb3-f0608034df2f.15f09e4d6d3f22a9], Reason = [Created], Message = [Created container filler-pod-4c0aa5fc-c16e-46dc-aeb3-f0608034df2f] STEP: Considering event: Type = [Normal], Name = [filler-pod-4c0aa5fc-c16e-46dc-aeb3-f0608034df2f.15f09e4dba391a72], Reason = [Started], Message = [Started container filler-pod-4c0aa5fc-c16e-46dc-aeb3-f0608034df2f] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f09e4e93cda3e2], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.15f09e4e94ca3769], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.] STEP: removing the label node off the node jerma-node STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-server-mvvl6gufaqub STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:18:50.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9306" for this suite. 
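The filler pods in this test work by requesting nearly all of a node's remaining allocatable CPU, so the final "additional-pod" cannot be scheduled anywhere. A minimal sketch of such a filler pod, reconstructed from the log (the pod name, namespace, node label, image, and the 2786m figure are taken from the log above; the exact manifest the test builds is not shown in the output):

```yaml
# Sketch of a CPU "filler" pod like the ones this test creates.
# The CPU request (2786m, per the log) is the node's allocatable
# CPU minus what already-running pods request, so one more pod
# with any CPU request triggers FailedScheduling.
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-4c0aa5fc-c16e-46dc-aeb3-f0608034df2f   # name from the log
  namespace: sched-pred-9306
spec:
  nodeSelector:
    node: jerma-node        # label the test applies, then removes afterwards
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1   # image reported in the Pulled event
    resources:
      requests:
        cpu: 2786m
      limits:
        cpu: 2786m
```

The two `FailedScheduling` events above ("0/2 nodes are available: 2 Insufficient cpu.") are exactly what the test asserts on once both filler pods are running.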
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:13.610 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":40,"skipped":786,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:18:50.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 5 21:18:51.251: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 5 21:18:53.275: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:18:55.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:18:58.098: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534331, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 5 21:19:00.444: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:19:01.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1974" for this suite. STEP: Destroying namespace "webhook-1974-markers" for this suite. 
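The fail-closed behaviour exercised here comes from the webhook's `failurePolicy`. A hypothetical registration illustrating the idea (the configuration name, webhook name, service name, and path are illustrative, not taken from the test fixture; the namespace matches the log):

```yaml
# Sketch: a validating webhook whose backend cannot be reached.
# With failurePolicy: Fail, the API server rejects matching
# requests (here, configmap creation) whenever the webhook call fails,
# which is what "unconditionally reject" verifies.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-webhook          # illustrative name
webhooks:
- name: fail-closed.example.com      # illustrative name
  failurePolicy: Fail                # reject when the webhook is unreachable
  sideEffects: None
  admissionReviewVersions: ["v1"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-1974        # namespace from the log
      name: no-such-service          # deliberately unreachable backend
      path: /validate
```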
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.844 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":41,"skipped":793,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:19:01.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 5 21:19:01.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004" in namespace "downward-api-6651" to be "success or failure" Feb 5 21:19:01.699: INFO: Pod 
"downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004": Phase="Pending", Reason="", readiness=false. Elapsed: 7.564517ms Feb 5 21:19:03.947: INFO: Pod "downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255516692s Feb 5 21:19:06.424: INFO: Pod "downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004": Phase="Pending", Reason="", readiness=false. Elapsed: 4.732705284s Feb 5 21:19:08.431: INFO: Pod "downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004": Phase="Pending", Reason="", readiness=false. Elapsed: 6.739458861s Feb 5 21:19:10.440: INFO: Pod "downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004": Phase="Pending", Reason="", readiness=false. Elapsed: 8.748440527s Feb 5 21:19:12.446: INFO: Pod "downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.754908358s STEP: Saw pod success Feb 5 21:19:12.447: INFO: Pod "downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004" satisfied condition "success or failure" Feb 5 21:19:12.449: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004 container client-container: STEP: delete the pod Feb 5 21:19:12.538: INFO: Waiting for pod downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004 to disappear Feb 5 21:19:12.546: INFO: Pod downwardapi-volume-d533290f-567d-4e0e-a33c-2c225607f004 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:19:12.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6651" for this suite. 
• [SLOW TEST:11.071 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":794,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:19:12.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 5 21:19:12.841: INFO: Waiting up to 5m0s for pod "pod-18dfcc1c-adb8-4495-861f-4fba32f28ace" in namespace "emptydir-9297" to be "success or failure" Feb 5 21:19:12.871: INFO: Pod "pod-18dfcc1c-adb8-4495-861f-4fba32f28ace": Phase="Pending", Reason="", readiness=false. Elapsed: 29.697412ms Feb 5 21:19:14.879: INFO: Pod "pod-18dfcc1c-adb8-4495-861f-4fba32f28ace": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036862155s Feb 5 21:19:16.886: INFO: Pod "pod-18dfcc1c-adb8-4495-861f-4fba32f28ace": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.044116332s Feb 5 21:19:18.892: INFO: Pod "pod-18dfcc1c-adb8-4495-861f-4fba32f28ace": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05011174s Feb 5 21:19:20.899: INFO: Pod "pod-18dfcc1c-adb8-4495-861f-4fba32f28ace": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.057053159s STEP: Saw pod success Feb 5 21:19:20.899: INFO: Pod "pod-18dfcc1c-adb8-4495-861f-4fba32f28ace" satisfied condition "success or failure" Feb 5 21:19:20.903: INFO: Trying to get logs from node jerma-node pod pod-18dfcc1c-adb8-4495-861f-4fba32f28ace container test-container: STEP: delete the pod Feb 5 21:19:20.949: INFO: Waiting for pod pod-18dfcc1c-adb8-4495-861f-4fba32f28ace to disappear Feb 5 21:19:20.954: INFO: Pod pod-18dfcc1c-adb8-4495-861f-4fba32f28ace no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:19:20.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9297" for this suite. • [SLOW TEST:8.330 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":805,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:19:20.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:19:32.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4889" for this suite. • [SLOW TEST:11.233 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":44,"skipped":806,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:19:32.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 5 21:19:32.402: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd435a0e-1321-4e3c-8cd5-ccf488813b15" in namespace "downward-api-3856" to be "success or failure" Feb 5 21:19:32.430: INFO: Pod "downwardapi-volume-cd435a0e-1321-4e3c-8cd5-ccf488813b15": Phase="Pending", Reason="", readiness=false. Elapsed: 27.530231ms Feb 5 21:19:34.441: INFO: Pod "downwardapi-volume-cd435a0e-1321-4e3c-8cd5-ccf488813b15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038839544s Feb 5 21:19:36.449: INFO: Pod "downwardapi-volume-cd435a0e-1321-4e3c-8cd5-ccf488813b15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046808866s Feb 5 21:19:38.458: INFO: Pod "downwardapi-volume-cd435a0e-1321-4e3c-8cd5-ccf488813b15": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.055452445s Feb 5 21:19:40.465: INFO: Pod "downwardapi-volume-cd435a0e-1321-4e3c-8cd5-ccf488813b15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062624754s STEP: Saw pod success Feb 5 21:19:40.465: INFO: Pod "downwardapi-volume-cd435a0e-1321-4e3c-8cd5-ccf488813b15" satisfied condition "success or failure" Feb 5 21:19:40.469: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-cd435a0e-1321-4e3c-8cd5-ccf488813b15 container client-container: STEP: delete the pod Feb 5 21:19:40.532: INFO: Waiting for pod downwardapi-volume-cd435a0e-1321-4e3c-8cd5-ccf488813b15 to disappear Feb 5 21:19:40.538: INFO: Pod downwardapi-volume-cd435a0e-1321-4e3c-8cd5-ccf488813b15 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:19:40.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3856" for this suite. 
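The DefaultMode check follows the same pattern, but sets a file mode on the downward API volume and has the container report the permissions of a projected file. A sketch, assuming mode 0400 and a `metadata.name` projection purely for illustration:

```yaml
# Sketch: downwardAPI volume with an explicit defaultMode.
# The container stats the projected file so the test can verify
# the mode was applied.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                        # illustrative image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400                   # assumed value for illustration
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```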
• [SLOW TEST:8.354 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:19:40.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:19:47.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6277" for this suite. STEP: Destroying namespace "nsdeletetest-6467" for this suite. Feb 5 21:19:47.096: INFO: Namespace nsdeletetest-6467 was already deleted STEP: Destroying namespace "nsdeletetest-5408" for this suite. • [SLOW TEST:6.545 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":46,"skipped":840,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:19:47.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Feb 5 21:19:55.856: INFO: Successfully updated pod "labelsupdateae00e560-4b3c-4421-8182-e57820b15c42" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:19:57.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6972" for this suite. • [SLOW TEST:10.844 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":859,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:19:57.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Feb 5 21:19:58.026: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4167' Feb 5 21:19:58.408: INFO: stderr: "" Feb 5 21:19:58.409: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 5 21:19:59.415: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:19:59.415: INFO: Found 0 / 1 Feb 5 21:20:00.416: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:20:00.416: INFO: Found 0 / 1 Feb 5 21:20:01.899: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:20:01.899: INFO: Found 0 / 1 Feb 5 21:20:02.426: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:20:02.426: INFO: Found 0 / 1 Feb 5 21:20:03.429: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:20:03.429: INFO: Found 0 / 1 Feb 5 21:20:04.465: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:20:04.465: INFO: Found 0 / 1 Feb 5 21:20:05.416: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:20:05.416: INFO: Found 0 / 1 Feb 5 21:20:06.415: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:20:06.415: INFO: Found 0 / 1 Feb 5 21:20:07.450: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:20:07.450: INFO: Found 1 / 1 Feb 5 21:20:07.450: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 5 21:20:07.579: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:20:07.579: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 5 21:20:07.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-jk888 --namespace=kubectl-4167 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 5 21:20:07.704: INFO: stderr: "" Feb 5 21:20:07.704: INFO: stdout: "pod/agnhost-master-jk888 patched\n" STEP: checking annotations Feb 5 21:20:07.710: INFO: Selector matched 1 pods for map[app:agnhost] Feb 5 21:20:07.710: INFO: ForEach: Found 1 pods from the filter. 
Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:20:07.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4167" for this suite. • [SLOW TEST:9.771 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":48,"skipped":874,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:20:07.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-mbdk STEP: Creating a pod to test atomic-volume-subpath Feb 5 21:20:08.129: INFO: Waiting up to 5m0s for pod 
"pod-subpath-test-configmap-mbdk" in namespace "subpath-6181" to be "success or failure" Feb 5 21:20:08.158: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Pending", Reason="", readiness=false. Elapsed: 28.885002ms Feb 5 21:20:10.167: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037984696s Feb 5 21:20:12.177: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047803528s Feb 5 21:20:14.184: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054662719s Feb 5 21:20:16.196: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066880201s Feb 5 21:20:18.203: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Running", Reason="", readiness=true. Elapsed: 10.074227332s Feb 5 21:20:20.209: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Running", Reason="", readiness=true. Elapsed: 12.080055631s Feb 5 21:20:22.214: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Running", Reason="", readiness=true. Elapsed: 14.08524985s Feb 5 21:20:24.229: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Running", Reason="", readiness=true. Elapsed: 16.100038019s Feb 5 21:20:26.236: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Running", Reason="", readiness=true. Elapsed: 18.107159177s Feb 5 21:20:28.245: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Running", Reason="", readiness=true. Elapsed: 20.11557471s Feb 5 21:20:30.253: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Running", Reason="", readiness=true. Elapsed: 22.123886042s Feb 5 21:20:32.259: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Running", Reason="", readiness=true. Elapsed: 24.129564304s Feb 5 21:20:34.266: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.136621495s Feb 5 21:20:36.271: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Running", Reason="", readiness=true. Elapsed: 28.142159994s Feb 5 21:20:38.281: INFO: Pod "pod-subpath-test-configmap-mbdk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.15180841s STEP: Saw pod success Feb 5 21:20:38.281: INFO: Pod "pod-subpath-test-configmap-mbdk" satisfied condition "success or failure" Feb 5 21:20:38.286: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-mbdk container test-container-subpath-configmap-mbdk: STEP: delete the pod Feb 5 21:20:38.472: INFO: Waiting for pod pod-subpath-test-configmap-mbdk to disappear Feb 5 21:20:38.492: INFO: Pod pod-subpath-test-configmap-mbdk no longer exists STEP: Deleting pod pod-subpath-test-configmap-mbdk Feb 5 21:20:38.492: INFO: Deleting pod "pod-subpath-test-configmap-mbdk" in namespace "subpath-6181" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:20:38.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6181" for this suite. 
• [SLOW TEST:30.794 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":49,"skipped":876,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:20:38.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:20:38.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pods-5293" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":50,"skipped":877,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:20:38.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 5 21:20:39.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4160cdd-4e0c-4641-a25e-7d1be03d71d0" in namespace "downward-api-4242" to be "success or failure" Feb 5 21:20:40.621: INFO: Pod "downwardapi-volume-c4160cdd-4e0c-4641-a25e-7d1be03d71d0": Phase="Pending", Reason="", readiness=false. Elapsed: 1.474950111s Feb 5 21:20:42.635: INFO: Pod "downwardapi-volume-c4160cdd-4e0c-4641-a25e-7d1be03d71d0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488867058s Feb 5 21:20:44.641: INFO: Pod "downwardapi-volume-c4160cdd-4e0c-4641-a25e-7d1be03d71d0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.495115968s Feb 5 21:20:46.646: INFO: Pod "downwardapi-volume-c4160cdd-4e0c-4641-a25e-7d1be03d71d0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.499916588s Feb 5 21:20:48.654: INFO: Pod "downwardapi-volume-c4160cdd-4e0c-4641-a25e-7d1be03d71d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.507370737s STEP: Saw pod success Feb 5 21:20:48.654: INFO: Pod "downwardapi-volume-c4160cdd-4e0c-4641-a25e-7d1be03d71d0" satisfied condition "success or failure" Feb 5 21:20:48.659: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c4160cdd-4e0c-4641-a25e-7d1be03d71d0 container client-container: STEP: delete the pod Feb 5 21:20:48.897: INFO: Waiting for pod downwardapi-volume-c4160cdd-4e0c-4641-a25e-7d1be03d71d0 to disappear Feb 5 21:20:48.926: INFO: Pod downwardapi-volume-c4160cdd-4e0c-4641-a25e-7d1be03d71d0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:20:48.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4242" for this suite. 
• [SLOW TEST:10.118 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:20:48.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8137 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8137 STEP: Waiting until all stateful set ss replicas will be running 
in namespace statefulset-8137 Feb 5 21:20:49.270: INFO: Found 0 stateful pods, waiting for 1 Feb 5 21:20:59.277: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 5 21:20:59.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 5 21:20:59.687: INFO: stderr: "I0205 21:20:59.479146 802 log.go:172] (0xc000105ce0) (0xc0008ca640) Create stream\nI0205 21:20:59.479304 802 log.go:172] (0xc000105ce0) (0xc0008ca640) Stream added, broadcasting: 1\nI0205 21:20:59.489417 802 log.go:172] (0xc000105ce0) Reply frame received for 1\nI0205 21:20:59.489468 802 log.go:172] (0xc000105ce0) (0xc000672820) Create stream\nI0205 21:20:59.489478 802 log.go:172] (0xc000105ce0) (0xc000672820) Stream added, broadcasting: 3\nI0205 21:20:59.491318 802 log.go:172] (0xc000105ce0) Reply frame received for 3\nI0205 21:20:59.491401 802 log.go:172] (0xc000105ce0) (0xc0004ed5e0) Create stream\nI0205 21:20:59.491414 802 log.go:172] (0xc000105ce0) (0xc0004ed5e0) Stream added, broadcasting: 5\nI0205 21:20:59.493617 802 log.go:172] (0xc000105ce0) Reply frame received for 5\nI0205 21:20:59.557064 802 log.go:172] (0xc000105ce0) Data frame received for 5\nI0205 21:20:59.557148 802 log.go:172] (0xc0004ed5e0) (5) Data frame handling\nI0205 21:20:59.557186 802 log.go:172] (0xc0004ed5e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0205 21:20:59.610141 802 log.go:172] (0xc000105ce0) Data frame received for 3\nI0205 21:20:59.610189 802 log.go:172] (0xc000672820) (3) Data frame handling\nI0205 21:20:59.610210 802 log.go:172] (0xc000672820) (3) Data frame sent\nI0205 21:20:59.680090 802 log.go:172] (0xc000105ce0) Data frame received for 1\nI0205 21:20:59.680203 802 log.go:172] (0xc000105ce0) (0xc000672820) Stream removed, 
broadcasting: 3\nI0205 21:20:59.680262 802 log.go:172] (0xc0008ca640) (1) Data frame handling\nI0205 21:20:59.680278 802 log.go:172] (0xc0008ca640) (1) Data frame sent\nI0205 21:20:59.680318 802 log.go:172] (0xc000105ce0) (0xc0004ed5e0) Stream removed, broadcasting: 5\nI0205 21:20:59.680342 802 log.go:172] (0xc000105ce0) (0xc0008ca640) Stream removed, broadcasting: 1\nI0205 21:20:59.680355 802 log.go:172] (0xc000105ce0) Go away received\nI0205 21:20:59.681143 802 log.go:172] (0xc000105ce0) (0xc0008ca640) Stream removed, broadcasting: 1\nI0205 21:20:59.681155 802 log.go:172] (0xc000105ce0) (0xc000672820) Stream removed, broadcasting: 3\nI0205 21:20:59.681166 802 log.go:172] (0xc000105ce0) (0xc0004ed5e0) Stream removed, broadcasting: 5\n" Feb 5 21:20:59.688: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 5 21:20:59.688: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 5 21:20:59.692: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 5 21:21:09.699: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 5 21:21:09.699: INFO: Waiting for statefulset status.replicas updated to 0 Feb 5 21:21:09.778: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999565s Feb 5 21:21:10.785: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.933838462s Feb 5 21:21:11.793: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.927122083s Feb 5 21:21:12.799: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.919272309s Feb 5 21:21:13.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.913518358s Feb 5 21:21:14.815: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.903931249s Feb 5 21:21:15.823: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.896935984s Feb 5 
21:21:16.830: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.88923101s Feb 5 21:21:17.837: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.882527568s Feb 5 21:21:18.845: INFO: Verifying statefulset ss doesn't scale past 1 for another 875.014724ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8137 Feb 5 21:21:19.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:21:20.262: INFO: stderr: "I0205 21:21:20.084602 822 log.go:172] (0xc0000fcc60) (0xc000677a40) Create stream\nI0205 21:21:20.084755 822 log.go:172] (0xc0000fcc60) (0xc000677a40) Stream added, broadcasting: 1\nI0205 21:21:20.088280 822 log.go:172] (0xc0000fcc60) Reply frame received for 1\nI0205 21:21:20.088398 822 log.go:172] (0xc0000fcc60) (0xc000638000) Create stream\nI0205 21:21:20.088426 822 log.go:172] (0xc0000fcc60) (0xc000638000) Stream added, broadcasting: 3\nI0205 21:21:20.090746 822 log.go:172] (0xc0000fcc60) Reply frame received for 3\nI0205 21:21:20.090806 822 log.go:172] (0xc0000fcc60) (0xc000677c20) Create stream\nI0205 21:21:20.090820 822 log.go:172] (0xc0000fcc60) (0xc000677c20) Stream added, broadcasting: 5\nI0205 21:21:20.092165 822 log.go:172] (0xc0000fcc60) Reply frame received for 5\nI0205 21:21:20.172320 822 log.go:172] (0xc0000fcc60) Data frame received for 5\nI0205 21:21:20.172591 822 log.go:172] (0xc000677c20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0205 21:21:20.172702 822 log.go:172] (0xc000677c20) (5) Data frame sent\nI0205 21:21:20.173188 822 log.go:172] (0xc0000fcc60) Data frame received for 3\nI0205 21:21:20.173231 822 log.go:172] (0xc000638000) (3) Data frame handling\nI0205 21:21:20.173249 822 log.go:172] (0xc000638000) (3) Data frame sent\nI0205 21:21:20.247633 822 log.go:172] 
(0xc0000fcc60) Data frame received for 1\nI0205 21:21:20.247796 822 log.go:172] (0xc0000fcc60) (0xc000638000) Stream removed, broadcasting: 3\nI0205 21:21:20.247875 822 log.go:172] (0xc000677a40) (1) Data frame handling\nI0205 21:21:20.247896 822 log.go:172] (0xc000677a40) (1) Data frame sent\nI0205 21:21:20.247987 822 log.go:172] (0xc0000fcc60) (0xc000677c20) Stream removed, broadcasting: 5\nI0205 21:21:20.248111 822 log.go:172] (0xc0000fcc60) (0xc000677a40) Stream removed, broadcasting: 1\nI0205 21:21:20.248190 822 log.go:172] (0xc0000fcc60) Go away received\nI0205 21:21:20.249079 822 log.go:172] (0xc0000fcc60) (0xc000677a40) Stream removed, broadcasting: 1\nI0205 21:21:20.249090 822 log.go:172] (0xc0000fcc60) (0xc000638000) Stream removed, broadcasting: 3\nI0205 21:21:20.249106 822 log.go:172] (0xc0000fcc60) (0xc000677c20) Stream removed, broadcasting: 5\n" Feb 5 21:21:20.262: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 5 21:21:20.262: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 5 21:21:20.338: INFO: Found 1 stateful pods, waiting for 3 Feb 5 21:21:30.345: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 5 21:21:30.346: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 5 21:21:30.346: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 5 21:21:40.348: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 5 21:21:40.348: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 5 21:21:40.348: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 5 21:21:40.355: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 5 21:21:43.244: INFO: stderr: "I0205 21:21:43.068268 844 log.go:172] (0xc000116fd0) (0xc0006c1ea0) Create stream\nI0205 21:21:43.068538 844 log.go:172] (0xc000116fd0) (0xc0006c1ea0) Stream added, broadcasting: 1\nI0205 21:21:43.074098 844 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0205 21:21:43.074253 844 log.go:172] (0xc000116fd0) (0xc0006c1f40) Create stream\nI0205 21:21:43.074269 844 log.go:172] (0xc000116fd0) (0xc0006c1f40) Stream added, broadcasting: 3\nI0205 21:21:43.076213 844 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0205 21:21:43.076243 844 log.go:172] (0xc000116fd0) (0xc0005e2780) Create stream\nI0205 21:21:43.076251 844 log.go:172] (0xc000116fd0) (0xc0005e2780) Stream added, broadcasting: 5\nI0205 21:21:43.078492 844 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0205 21:21:43.162518 844 log.go:172] (0xc000116fd0) Data frame received for 3\nI0205 21:21:43.162624 844 log.go:172] (0xc0006c1f40) (3) Data frame handling\nI0205 21:21:43.162659 844 log.go:172] (0xc0006c1f40) (3) Data frame sent\nI0205 21:21:43.162759 844 log.go:172] (0xc000116fd0) Data frame received for 5\nI0205 21:21:43.162774 844 log.go:172] (0xc0005e2780) (5) Data frame handling\nI0205 21:21:43.162796 844 log.go:172] (0xc0005e2780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0205 21:21:43.231038 844 log.go:172] (0xc000116fd0) (0xc0006c1f40) Stream removed, broadcasting: 3\nI0205 21:21:43.231236 844 log.go:172] (0xc000116fd0) Data frame received for 1\nI0205 21:21:43.231285 844 log.go:172] (0xc0006c1ea0) (1) Data frame handling\nI0205 21:21:43.231317 844 log.go:172] (0xc0006c1ea0) (1) Data frame sent\nI0205 21:21:43.231337 844 log.go:172] (0xc000116fd0) (0xc0006c1ea0) Stream removed, broadcasting: 1\nI0205 21:21:43.231391 844 log.go:172] (0xc000116fd0) 
(0xc0005e2780) Stream removed, broadcasting: 5\nI0205 21:21:43.231498 844 log.go:172] (0xc000116fd0) Go away received\nI0205 21:21:43.232510 844 log.go:172] (0xc000116fd0) (0xc0006c1ea0) Stream removed, broadcasting: 1\nI0205 21:21:43.232530 844 log.go:172] (0xc000116fd0) (0xc0006c1f40) Stream removed, broadcasting: 3\nI0205 21:21:43.232540 844 log.go:172] (0xc000116fd0) (0xc0005e2780) Stream removed, broadcasting: 5\n" Feb 5 21:21:43.244: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 5 21:21:43.244: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 5 21:21:43.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 5 21:21:43.622: INFO: stderr: "I0205 21:21:43.421893 871 log.go:172] (0xc000a9cbb0) (0xc000a0e5a0) Create stream\nI0205 21:21:43.422144 871 log.go:172] (0xc000a9cbb0) (0xc000a0e5a0) Stream added, broadcasting: 1\nI0205 21:21:43.425446 871 log.go:172] (0xc000a9cbb0) Reply frame received for 1\nI0205 21:21:43.425486 871 log.go:172] (0xc000a9cbb0) (0xc000a7a0a0) Create stream\nI0205 21:21:43.425498 871 log.go:172] (0xc000a9cbb0) (0xc000a7a0a0) Stream added, broadcasting: 3\nI0205 21:21:43.426503 871 log.go:172] (0xc000a9cbb0) Reply frame received for 3\nI0205 21:21:43.426531 871 log.go:172] (0xc000a9cbb0) (0xc0009cc0a0) Create stream\nI0205 21:21:43.426581 871 log.go:172] (0xc000a9cbb0) (0xc0009cc0a0) Stream added, broadcasting: 5\nI0205 21:21:43.427928 871 log.go:172] (0xc000a9cbb0) Reply frame received for 5\nI0205 21:21:43.495170 871 log.go:172] (0xc000a9cbb0) Data frame received for 5\nI0205 21:21:43.495267 871 log.go:172] (0xc0009cc0a0) (5) Data frame handling\nI0205 21:21:43.495286 871 log.go:172] (0xc0009cc0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0205 21:21:43.514389 871 log.go:172] (0xc000a9cbb0) Data frame received for 3\nI0205 21:21:43.514425 871 log.go:172] (0xc000a7a0a0) (3) Data frame handling\nI0205 21:21:43.514447 871 log.go:172] (0xc000a7a0a0) (3) Data frame sent\nI0205 21:21:43.591024 871 log.go:172] (0xc000a9cbb0) (0xc000a7a0a0) Stream removed, broadcasting: 3\nI0205 21:21:43.591211 871 log.go:172] (0xc000a9cbb0) Data frame received for 1\nI0205 21:21:43.591248 871 log.go:172] (0xc000a0e5a0) (1) Data frame handling\nI0205 21:21:43.591276 871 log.go:172] (0xc000a0e5a0) (1) Data frame sent\nI0205 21:21:43.591293 871 log.go:172] (0xc000a9cbb0) (0xc000a0e5a0) Stream removed, broadcasting: 1\nI0205 21:21:43.602902 871 log.go:172] (0xc000a9cbb0) (0xc0009cc0a0) Stream removed, broadcasting: 5\nI0205 21:21:43.604249 871 log.go:172] (0xc000a9cbb0) (0xc000a0e5a0) Stream removed, broadcasting: 1\nI0205 21:21:43.604474 871 log.go:172] (0xc000a9cbb0) (0xc000a7a0a0) Stream removed, broadcasting: 3\nI0205 21:21:43.607991 871 log.go:172] (0xc000a9cbb0) (0xc0009cc0a0) Stream removed, broadcasting: 5\nI0205 21:21:43.608113 871 log.go:172] (0xc000a9cbb0) Go away received\n" Feb 5 21:21:43.623: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 5 21:21:43.623: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 5 21:21:43.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 5 21:21:44.181: INFO: stderr: "I0205 21:21:43.884134 891 log.go:172] (0xc00053a9a0) (0xc0005de280) Create stream\nI0205 21:21:43.884554 891 log.go:172] (0xc00053a9a0) (0xc0005de280) Stream added, broadcasting: 1\nI0205 21:21:43.908599 891 log.go:172] (0xc00053a9a0) Reply frame received for 1\nI0205 21:21:43.908866 891 log.go:172] (0xc00053a9a0) (0xc0005de320) Create 
stream\nI0205 21:21:43.908894 891 log.go:172] (0xc00053a9a0) (0xc0005de320) Stream added, broadcasting: 3\nI0205 21:21:43.914104 891 log.go:172] (0xc00053a9a0) Reply frame received for 3\nI0205 21:21:43.914260 891 log.go:172] (0xc00053a9a0) (0xc000287ae0) Create stream\nI0205 21:21:43.914306 891 log.go:172] (0xc00053a9a0) (0xc000287ae0) Stream added, broadcasting: 5\nI0205 21:21:43.921372 891 log.go:172] (0xc00053a9a0) Reply frame received for 5\nI0205 21:21:44.054574 891 log.go:172] (0xc00053a9a0) Data frame received for 5\nI0205 21:21:44.055022 891 log.go:172] (0xc000287ae0) (5) Data frame handling\nI0205 21:21:44.055163 891 log.go:172] (0xc000287ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0205 21:21:44.066227 891 log.go:172] (0xc00053a9a0) Data frame received for 3\nI0205 21:21:44.066717 891 log.go:172] (0xc0005de320) (3) Data frame handling\nI0205 21:21:44.066816 891 log.go:172] (0xc0005de320) (3) Data frame sent\nI0205 21:21:44.164265 891 log.go:172] (0xc00053a9a0) Data frame received for 1\nI0205 21:21:44.164385 891 log.go:172] (0xc00053a9a0) (0xc0005de320) Stream removed, broadcasting: 3\nI0205 21:21:44.164452 891 log.go:172] (0xc0005de280) (1) Data frame handling\nI0205 21:21:44.164476 891 log.go:172] (0xc0005de280) (1) Data frame sent\nI0205 21:21:44.164521 891 log.go:172] (0xc00053a9a0) (0xc000287ae0) Stream removed, broadcasting: 5\nI0205 21:21:44.164575 891 log.go:172] (0xc00053a9a0) (0xc0005de280) Stream removed, broadcasting: 1\nI0205 21:21:44.164596 891 log.go:172] (0xc00053a9a0) Go away received\nI0205 21:21:44.166313 891 log.go:172] (0xc00053a9a0) (0xc0005de280) Stream removed, broadcasting: 1\nI0205 21:21:44.166338 891 log.go:172] (0xc00053a9a0) (0xc0005de320) Stream removed, broadcasting: 3\nI0205 21:21:44.166351 891 log.go:172] (0xc00053a9a0) (0xc000287ae0) Stream removed, broadcasting: 5\n" Feb 5 21:21:44.182: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 5 21:21:44.182: 
INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 5 21:21:44.182: INFO: Waiting for statefulset status.replicas updated to 0 Feb 5 21:21:44.190: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 5 21:21:54.206: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 5 21:21:54.206: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 5 21:21:54.206: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 5 21:21:54.237: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999639s Feb 5 21:21:55.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978976329s Feb 5 21:21:56.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971634316s Feb 5 21:21:57.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.963598402s Feb 5 21:21:58.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.954382009s Feb 5 21:21:59.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.945751625s Feb 5 21:22:00.285: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.936776866s Feb 5 21:22:01.292: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.930963636s Feb 5 21:22:02.298: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.923865693s Feb 5 21:22:03.308: INFO: Verifying statefulset ss doesn't scale past 3 for another 918.327647ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8137 Feb 5 21:22:04.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:22:05.094: INFO: stderr: "I0205 21:22:04.727247 
913 log.go:172] (0xc0007baa50) (0xc000802000) Create stream\nI0205 21:22:04.727509 913 log.go:172] (0xc0007baa50) (0xc000802000) Stream added, broadcasting: 1\nI0205 21:22:04.730803 913 log.go:172] (0xc0007baa50) Reply frame received for 1\nI0205 21:22:04.730855 913 log.go:172] (0xc0007baa50) (0xc0007123c0) Create stream\nI0205 21:22:04.730865 913 log.go:172] (0xc0007baa50) (0xc0007123c0) Stream added, broadcasting: 3\nI0205 21:22:04.732432 913 log.go:172] (0xc0007baa50) Reply frame received for 3\nI0205 21:22:04.732457 913 log.go:172] (0xc0007baa50) (0xc0008020a0) Create stream\nI0205 21:22:04.732465 913 log.go:172] (0xc0007baa50) (0xc0008020a0) Stream added, broadcasting: 5\nI0205 21:22:04.734603 913 log.go:172] (0xc0007baa50) Reply frame received for 5\nI0205 21:22:04.972923 913 log.go:172] (0xc0007baa50) Data frame received for 3\nI0205 21:22:04.973028 913 log.go:172] (0xc0007123c0) (3) Data frame handling\nI0205 21:22:04.973052 913 log.go:172] (0xc0007123c0) (3) Data frame sent\nI0205 21:22:04.973115 913 log.go:172] (0xc0007baa50) Data frame received for 5\nI0205 21:22:04.973120 913 log.go:172] (0xc0008020a0) (5) Data frame handling\nI0205 21:22:04.973127 913 log.go:172] (0xc0008020a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0205 21:22:05.082705 913 log.go:172] (0xc0007baa50) Data frame received for 1\nI0205 21:22:05.083050 913 log.go:172] (0xc0007baa50) (0xc0008020a0) Stream removed, broadcasting: 5\nI0205 21:22:05.083163 913 log.go:172] (0xc000802000) (1) Data frame handling\nI0205 21:22:05.083215 913 log.go:172] (0xc000802000) (1) Data frame sent\nI0205 21:22:05.083244 913 log.go:172] (0xc0007baa50) (0xc0007123c0) Stream removed, broadcasting: 3\nI0205 21:22:05.083305 913 log.go:172] (0xc0007baa50) (0xc000802000) Stream removed, broadcasting: 1\nI0205 21:22:05.083327 913 log.go:172] (0xc0007baa50) Go away received\nI0205 21:22:05.084614 913 log.go:172] (0xc0007baa50) (0xc000802000) Stream removed, broadcasting: 1\nI0205 
21:22:05.084634 913 log.go:172] (0xc0007baa50) (0xc0007123c0) Stream removed, broadcasting: 3\nI0205 21:22:05.084650 913 log.go:172] (0xc0007baa50) (0xc0008020a0) Stream removed, broadcasting: 5\n" Feb 5 21:22:05.094: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 5 21:22:05.094: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 5 21:22:05.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:22:05.481: INFO: stderr: "I0205 21:22:05.285683 932 log.go:172] (0xc000acc000) (0xc000a80320) Create stream\nI0205 21:22:05.285855 932 log.go:172] (0xc000acc000) (0xc000a80320) Stream added, broadcasting: 1\nI0205 21:22:05.292169 932 log.go:172] (0xc000acc000) Reply frame received for 1\nI0205 21:22:05.292235 932 log.go:172] (0xc000acc000) (0xc000a803c0) Create stream\nI0205 21:22:05.292245 932 log.go:172] (0xc000acc000) (0xc000a803c0) Stream added, broadcasting: 3\nI0205 21:22:05.293277 932 log.go:172] (0xc000acc000) Reply frame received for 3\nI0205 21:22:05.293305 932 log.go:172] (0xc000acc000) (0xc000a60000) Create stream\nI0205 21:22:05.293321 932 log.go:172] (0xc000acc000) (0xc000a60000) Stream added, broadcasting: 5\nI0205 21:22:05.294127 932 log.go:172] (0xc000acc000) Reply frame received for 5\nI0205 21:22:05.382515 932 log.go:172] (0xc000acc000) Data frame received for 3\nI0205 21:22:05.382742 932 log.go:172] (0xc000a803c0) (3) Data frame handling\nI0205 21:22:05.382758 932 log.go:172] (0xc000a803c0) (3) Data frame sent\nI0205 21:22:05.382803 932 log.go:172] (0xc000acc000) Data frame received for 5\nI0205 21:22:05.382811 932 log.go:172] (0xc000a60000) (5) Data frame handling\nI0205 21:22:05.382819 932 log.go:172] (0xc000a60000) (5) Data frame sent\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0205 21:22:05.472206 932 log.go:172] (0xc000acc000) Data frame received for 1\nI0205 21:22:05.472261 932 log.go:172] (0xc000a80320) (1) Data frame handling\nI0205 21:22:05.472281 932 log.go:172] (0xc000a80320) (1) Data frame sent\nI0205 21:22:05.472649 932 log.go:172] (0xc000acc000) (0xc000a803c0) Stream removed, broadcasting: 3\nI0205 21:22:05.472703 932 log.go:172] (0xc000acc000) (0xc000a80320) Stream removed, broadcasting: 1\nI0205 21:22:05.472748 932 log.go:172] (0xc000acc000) (0xc000a60000) Stream removed, broadcasting: 5\nI0205 21:22:05.472774 932 log.go:172] (0xc000acc000) Go away received\nI0205 21:22:05.473464 932 log.go:172] (0xc000acc000) (0xc000a80320) Stream removed, broadcasting: 1\nI0205 21:22:05.473512 932 log.go:172] (0xc000acc000) (0xc000a803c0) Stream removed, broadcasting: 3\nI0205 21:22:05.473523 932 log.go:172] (0xc000acc000) (0xc000a60000) Stream removed, broadcasting: 5\n" Feb 5 21:22:05.481: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 5 21:22:05.481: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 5 21:22:05.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8137 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:22:05.808: INFO: stderr: "I0205 21:22:05.629509 952 log.go:172] (0xc0009cfa20) (0xc000a48780) Create stream\nI0205 21:22:05.629697 952 log.go:172] (0xc0009cfa20) (0xc000a48780) Stream added, broadcasting: 1\nI0205 21:22:05.641387 952 log.go:172] (0xc0009cfa20) Reply frame received for 1\nI0205 21:22:05.641539 952 log.go:172] (0xc0009cfa20) (0xc00060c640) Create stream\nI0205 21:22:05.641562 952 log.go:172] (0xc0009cfa20) (0xc00060c640) Stream added, broadcasting: 3\nI0205 21:22:05.643456 952 log.go:172] (0xc0009cfa20) Reply frame received for 3\nI0205 21:22:05.643489 952 
log.go:172] (0xc0009cfa20) (0xc0002a1400) Create stream\nI0205 21:22:05.643502 952 log.go:172] (0xc0009cfa20) (0xc0002a1400) Stream added, broadcasting: 5\nI0205 21:22:05.645659 952 log.go:172] (0xc0009cfa20) Reply frame received for 5\nI0205 21:22:05.729759 952 log.go:172] (0xc0009cfa20) Data frame received for 3\nI0205 21:22:05.729843 952 log.go:172] (0xc00060c640) (3) Data frame handling\nI0205 21:22:05.729865 952 log.go:172] (0xc00060c640) (3) Data frame sent\nI0205 21:22:05.729882 952 log.go:172] (0xc0009cfa20) Data frame received for 5\nI0205 21:22:05.729889 952 log.go:172] (0xc0002a1400) (5) Data frame handling\nI0205 21:22:05.729895 952 log.go:172] (0xc0002a1400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0205 21:22:05.795470 952 log.go:172] (0xc0009cfa20) Data frame received for 1\nI0205 21:22:05.795573 952 log.go:172] (0xc000a48780) (1) Data frame handling\nI0205 21:22:05.795599 952 log.go:172] (0xc000a48780) (1) Data frame sent\nI0205 21:22:05.795924 952 log.go:172] (0xc0009cfa20) (0xc000a48780) Stream removed, broadcasting: 1\nI0205 21:22:05.796184 952 log.go:172] (0xc0009cfa20) (0xc0002a1400) Stream removed, broadcasting: 5\nI0205 21:22:05.796304 952 log.go:172] (0xc0009cfa20) (0xc00060c640) Stream removed, broadcasting: 3\nI0205 21:22:05.796385 952 log.go:172] (0xc0009cfa20) Go away received\nI0205 21:22:05.797700 952 log.go:172] (0xc0009cfa20) (0xc000a48780) Stream removed, broadcasting: 1\nI0205 21:22:05.797718 952 log.go:172] (0xc0009cfa20) (0xc00060c640) Stream removed, broadcasting: 3\nI0205 21:22:05.797727 952 log.go:172] (0xc0009cfa20) (0xc0002a1400) Stream removed, broadcasting: 5\n" Feb 5 21:22:05.808: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 5 21:22:05.808: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 5 21:22:05.808: INFO: Scaling statefulset ss to 0 STEP: Verifying that 
stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Feb 5 21:22:35.829: INFO: Deleting all statefulset in ns statefulset-8137 Feb 5 21:22:35.833: INFO: Scaling statefulset ss to 0 Feb 5 21:22:35.843: INFO: Waiting for statefulset status.replicas updated to 0 Feb 5 21:22:35.846: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:22:35.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8137" for this suite. • [SLOW TEST:107.002 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":52,"skipped":928,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a 
kubernetes client Feb 5 21:22:35.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 5 21:22:36.802: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 5 21:22:38.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:22:40.866: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, 
loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:22:42.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534556, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 5 21:22:45.856: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration 
API Feb 5 21:22:45.925: INFO: Waiting for webhook configuration to be ready... STEP: Creating a custom resource definition that should be denied by the webhook Feb 5 21:22:46.053: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:22:46.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3514" for this suite. STEP: Destroying namespace "webhook-3514-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.855 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":53,"skipped":961,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:22:47.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 5 21:22:48.830: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 5 21:22:50.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:22:52.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:22:54.854: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:22:56.854: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716534568, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 5 21:22:59.988: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:23:10.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4310" for this suite. STEP: Destroying namespace "webhook-4310-markers" for this suite. 
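The denials logged in the steps above (pod and configmap CREATE rejected, PUT/PATCH updates rejected, a whitelisted namespace bypassing the webhook) are driven by a ValidatingWebhookConfiguration pointed at the `e2e-test-webhook` service. A minimal sketch of such an object follows; the configuration name, webhook name, label key, path, and CA bundle are illustrative placeholders, not the exact objects the e2e framework registers:

```yaml
# Illustrative sketch only: a validating webhook that rejects pod and
# configmap creation/updates, similar in shape to what the test registers.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-unwanted-objects        # hypothetical name
webhooks:
  - name: deny.example.com           # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail              # with Fail, a hanging webhook blocks the request
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]   # covers the PUT/PATCH rejections in the log
        resources: ["pods", "configmaps"]
    namespaceSelector:               # lets a whitelisted namespace bypass the webhook
      matchExpressions:
        - key: skip-webhook          # hypothetical label key
          operator: DoesNotExist
    clientConfig:
      service:
        name: e2e-test-webhook       # service name seen in the log
        namespace: webhook-4310      # test namespace seen in the log
        path: /always-deny           # hypothetical path
      caBundle: <base64-encoded CA certificate>
```

Under this assumed shape, the API server sends an AdmissionReview to the service for every matching request and rejects the object whenever the webhook responds with `allowed: false` (or, with `failurePolicy: Fail`, whenever the webhook is unreachable or hangs).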
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.647 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":54,"skipped":971,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:23:10.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding 
the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:23:10.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3009" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":55,"skipped":989,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:23:10.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-add53f1d-1eb5-4d0f-905d-fcc28e627433 STEP: Creating secret with name s-test-opt-upd-ca1667fb-84ca-4ac2-bc0d-055a7d8b066d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-add53f1d-1eb5-4d0f-905d-fcc28e627433 STEP: Updating secret 
s-test-opt-upd-ca1667fb-84ca-4ac2-bc0d-055a7d8b066d STEP: Creating secret with name s-test-opt-create-245ec96a-9e84-4513-a08c-2779fa2b7c70 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:23:26.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1121" for this suite. • [SLOW TEST:16.373 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":1027,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:23:26.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:23:26.994: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any 
unknown properties Feb 5 21:23:29.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7977 create -f -' Feb 5 21:23:33.123: INFO: stderr: "" Feb 5 21:23:33.123: INFO: stdout: "e2e-test-crd-publish-openapi-6237-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 5 21:23:33.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7977 delete e2e-test-crd-publish-openapi-6237-crds test-cr' Feb 5 21:23:33.250: INFO: stderr: "" Feb 5 21:23:33.250: INFO: stdout: "e2e-test-crd-publish-openapi-6237-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Feb 5 21:23:33.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7977 apply -f -' Feb 5 21:23:33.531: INFO: stderr: "" Feb 5 21:23:33.531: INFO: stdout: "e2e-test-crd-publish-openapi-6237-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 5 21:23:33.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7977 delete e2e-test-crd-publish-openapi-6237-crds test-cr' Feb 5 21:23:33.667: INFO: stderr: "" Feb 5 21:23:33.667: INFO: stdout: "e2e-test-crd-publish-openapi-6237-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 5 21:23:33.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6237-crds' Feb 5 21:23:33.974: INFO: stderr: "" Feb 5 21:23:33.974: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6237-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:23:36.997: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7977" for this suite. • [SLOW TEST:10.085 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":57,"skipped":1027,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:23:37.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-qv6r STEP: Creating a pod to test atomic-volume-subpath Feb 5 21:23:37.166: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qv6r" in namespace "subpath-3878" to be "success or failure" Feb 5 21:23:37.212: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Pending", Reason="", readiness=false. 
Elapsed: 45.72245ms Feb 5 21:23:39.218: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05153569s Feb 5 21:23:41.816: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.649507341s Feb 5 21:23:43.826: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.659795158s Feb 5 21:23:45.840: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Running", Reason="", readiness=true. Elapsed: 8.674096344s Feb 5 21:23:47.850: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Running", Reason="", readiness=true. Elapsed: 10.684073313s Feb 5 21:23:49.871: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Running", Reason="", readiness=true. Elapsed: 12.705189942s Feb 5 21:23:51.879: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Running", Reason="", readiness=true. Elapsed: 14.712903764s Feb 5 21:23:53.885: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Running", Reason="", readiness=true. Elapsed: 16.719324738s Feb 5 21:23:55.897: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Running", Reason="", readiness=true. Elapsed: 18.730835802s Feb 5 21:23:57.903: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Running", Reason="", readiness=true. Elapsed: 20.736902147s Feb 5 21:23:59.913: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Running", Reason="", readiness=true. Elapsed: 22.746748341s Feb 5 21:24:01.918: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Running", Reason="", readiness=true. Elapsed: 24.751864388s Feb 5 21:24:03.925: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Running", Reason="", readiness=true. Elapsed: 26.759461508s Feb 5 21:24:05.930: INFO: Pod "pod-subpath-test-downwardapi-qv6r": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.764118057s STEP: Saw pod success Feb 5 21:24:05.930: INFO: Pod "pod-subpath-test-downwardapi-qv6r" satisfied condition "success or failure" Feb 5 21:24:05.932: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-qv6r container test-container-subpath-downwardapi-qv6r: STEP: delete the pod Feb 5 21:24:05.991: INFO: Waiting for pod pod-subpath-test-downwardapi-qv6r to disappear Feb 5 21:24:06.020: INFO: Pod pod-subpath-test-downwardapi-qv6r no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-qv6r Feb 5 21:24:06.020: INFO: Deleting pod "pod-subpath-test-downwardapi-qv6r" in namespace "subpath-3878" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:24:06.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3878" for this suite. • [SLOW TEST:29.074 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":58,"skipped":1027,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:24:06.084: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Feb 5 21:24:06.826: INFO: created pod pod-service-account-defaultsa Feb 5 21:24:06.826: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 5 21:24:06.881: INFO: created pod pod-service-account-mountsa Feb 5 21:24:06.882: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 5 21:24:06.899: INFO: created pod pod-service-account-nomountsa Feb 5 21:24:06.900: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 5 21:24:07.012: INFO: created pod pod-service-account-defaultsa-mountspec Feb 5 21:24:07.012: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 5 21:24:07.029: INFO: created pod pod-service-account-mountsa-mountspec Feb 5 21:24:07.029: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 5 21:24:07.055: INFO: created pod pod-service-account-nomountsa-mountspec Feb 5 21:24:07.055: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 5 21:24:07.091: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 5 21:24:07.091: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 5 21:24:07.214: INFO: created pod pod-service-account-mountsa-nomountspec Feb 5 21:24:07.214: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 5 21:24:07.230: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 5 21:24:07.230: INFO: pod pod-service-account-nomountsa-nomountspec service account token 
volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:24:07.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1449" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":59,"skipped":1030,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:24:08.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:24:31.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5269" for this suite. 
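The nine `pod-service-account-*` pods above exercise the token-automount precedence rule: a pod-level `automountServiceAccountToken` setting wins over the service account's setting, and the default is to mount. A minimal sketch of that decision (the helper name `effective_automount` is hypothetical; the logged outcomes for all nine combinations are reproduced in the comments):

```python
def effective_automount(pod_setting, sa_setting):
    """Decide whether a pod gets a service-account token volume mount.

    Precedence, as exercised by the nine pods in the test above:
      1. pod.spec.automountServiceAccountToken, when set
      2. the service account's automountServiceAccountToken, when set
      3. otherwise the default is to mount (True)
    """
    if pod_setting is not None:
        return pod_setting
    if sa_setting is not None:
        return sa_setting
    return True

# Encoding of the logged pods: service account defaultsa/mountsa/nomountsa
# maps to sa_setting None/True/False; the pod-spec suffix ""/mountspec/
# nomountspec maps to pod_setting None/True/False.
```

Note how `nomountsa-mountspec` logs `volume mount: true` while `mountsa-nomountspec` logs `false`: the pod spec always overrides the service account.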
• [SLOW TEST:22.698 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":1031,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:24:31.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 5 21:24:31.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7697a23-cb1e-4a8e-99af-c023e9982426" in namespace "projected-6983" to be "success or failure" Feb 5 21:24:31.318: INFO: Pod "downwardapi-volume-a7697a23-cb1e-4a8e-99af-c023e9982426": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.831097ms Feb 5 21:24:33.326: INFO: Pod "downwardapi-volume-a7697a23-cb1e-4a8e-99af-c023e9982426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036835331s Feb 5 21:24:35.335: INFO: Pod "downwardapi-volume-a7697a23-cb1e-4a8e-99af-c023e9982426": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045501634s Feb 5 21:24:37.364: INFO: Pod "downwardapi-volume-a7697a23-cb1e-4a8e-99af-c023e9982426": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074442722s Feb 5 21:24:39.373: INFO: Pod "downwardapi-volume-a7697a23-cb1e-4a8e-99af-c023e9982426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082924555s STEP: Saw pod success Feb 5 21:24:39.373: INFO: Pod "downwardapi-volume-a7697a23-cb1e-4a8e-99af-c023e9982426" satisfied condition "success or failure" Feb 5 21:24:39.376: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a7697a23-cb1e-4a8e-99af-c023e9982426 container client-container: STEP: delete the pod Feb 5 21:24:39.415: INFO: Waiting for pod downwardapi-volume-a7697a23-cb1e-4a8e-99af-c023e9982426 to disappear Feb 5 21:24:39.512: INFO: Pod downwardapi-volume-a7697a23-cb1e-4a8e-99af-c023e9982426 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:24:39.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6983" for this suite. 
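The repeated `Waiting up to 5m0s for pod ... Elapsed: ...` entries above come from the framework's poll-until-terminal-phase loop. A self-contained sketch of that pattern, assuming a caller-supplied `get_phase` callable rather than a real API client (the function name and defaults are illustrative, not the framework's actual signature):

```python
import time


def wait_for_pod_success(get_phase, timeout=300.0, interval=2.0):
    """Poll a pod's phase until it is terminal, mirroring the log's
    "Waiting up to 5m0s for pod ... to be 'success or failure'" loop:
    report elapsed time on each poll, return True on Succeeded, False
    on Failed, and give up once the timeout is exceeded."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        if elapsed >= timeout:
            raise TimeoutError(f"pod not terminal within {timeout}s")
        time.sleep(interval)
```

As in the log, a pod typically reports `Pending` for a few polls, then `Running`, then `Succeeded` once its container exits cleanly.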
• [SLOW TEST:8.356 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1035,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:24:39.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-fe5d218a-4277-4e5e-9460-7160ce5f4e7f STEP: Creating a pod to test consume secrets Feb 5 21:24:39.756: INFO: Waiting up to 5m0s for pod "pod-secrets-55ccc1f8-49d2-439c-9ccb-4230b448160c" in namespace "secrets-8169" to be "success or failure" Feb 5 21:24:39.770: INFO: Pod "pod-secrets-55ccc1f8-49d2-439c-9ccb-4230b448160c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.988ms Feb 5 21:24:41.779: INFO: Pod "pod-secrets-55ccc1f8-49d2-439c-9ccb-4230b448160c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.022782307s Feb 5 21:24:43.791: INFO: Pod "pod-secrets-55ccc1f8-49d2-439c-9ccb-4230b448160c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034240323s Feb 5 21:24:45.803: INFO: Pod "pod-secrets-55ccc1f8-49d2-439c-9ccb-4230b448160c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047170006s Feb 5 21:24:47.812: INFO: Pod "pod-secrets-55ccc1f8-49d2-439c-9ccb-4230b448160c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055421657s STEP: Saw pod success Feb 5 21:24:47.812: INFO: Pod "pod-secrets-55ccc1f8-49d2-439c-9ccb-4230b448160c" satisfied condition "success or failure" Feb 5 21:24:47.815: INFO: Trying to get logs from node jerma-node pod pod-secrets-55ccc1f8-49d2-439c-9ccb-4230b448160c container secret-env-test: STEP: delete the pod Feb 5 21:24:47.934: INFO: Waiting for pod pod-secrets-55ccc1f8-49d2-439c-9ccb-4230b448160c to disappear Feb 5 21:24:47.948: INFO: Pod pod-secrets-55ccc1f8-49d2-439c-9ccb-4230b448160c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:24:47.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8169" for this suite. 
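The `secret-env-test` container above consumes a Secret through environment variables. Secret `data` values are stored base64-encoded in the API and are decoded before being exposed to the container; a minimal sketch of that decoding step (the helper name and the sample key/value are illustrative):

```python
import base64


def secret_to_env(secret_data):
    """Decode a Secret's base64-encoded `data` map into the plain-text
    environment variables a consuming container observes."""
    return {key: base64.b64decode(value).decode() for key, value in secret_data.items()}
```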
• [SLOW TEST:8.440 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1046,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:24:47.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:24:48.147: INFO: Create a RollingUpdate DaemonSet Feb 5 21:24:48.227: INFO: Check that daemon pods launch on every node of the cluster Feb 5 21:24:48.278: INFO: Number of nodes with available pods: 0 Feb 5 21:24:48.278: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:24:50.713: INFO: Number of nodes with available pods: 0 Feb 5 21:24:50.713: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:24:51.617: INFO: Number of nodes with available pods: 0 Feb 5 21:24:51.617: INFO: Node jerma-node is running more than one daemon 
pod Feb 5 21:24:52.292: INFO: Number of nodes with available pods: 0 Feb 5 21:24:52.292: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:24:53.339: INFO: Number of nodes with available pods: 0 Feb 5 21:24:53.339: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:24:54.295: INFO: Number of nodes with available pods: 0 Feb 5 21:24:54.295: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:24:56.963: INFO: Number of nodes with available pods: 0 Feb 5 21:24:56.963: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:24:57.554: INFO: Number of nodes with available pods: 0 Feb 5 21:24:57.554: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:24:58.299: INFO: Number of nodes with available pods: 1 Feb 5 21:24:58.299: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 5 21:24:59.295: INFO: Number of nodes with available pods: 1 Feb 5 21:24:59.295: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 5 21:25:00.293: INFO: Number of nodes with available pods: 2 Feb 5 21:25:00.293: INFO: Number of running nodes: 2, number of available pods: 2 Feb 5 21:25:00.293: INFO: Update the DaemonSet to trigger a rollout Feb 5 21:25:00.307: INFO: Updating DaemonSet daemon-set Feb 5 21:25:13.334: INFO: Roll back the DaemonSet before rollout is complete Feb 5 21:25:13.383: INFO: Updating DaemonSet daemon-set Feb 5 21:25:13.383: INFO: Make sure DaemonSet rollback is complete Feb 5 21:25:13.406: INFO: Wrong image for pod: daemon-set-qncj9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 5 21:25:13.406: INFO: Pod daemon-set-qncj9 is not available Feb 5 21:25:14.433: INFO: Wrong image for pod: daemon-set-qncj9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 5 21:25:14.433: INFO: Pod daemon-set-qncj9 is not available Feb 5 21:25:15.424: INFO: Wrong image for pod: daemon-set-qncj9. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 5 21:25:15.424: INFO: Pod daemon-set-qncj9 is not available Feb 5 21:25:16.444: INFO: Wrong image for pod: daemon-set-qncj9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 5 21:25:16.444: INFO: Pod daemon-set-qncj9 is not available Feb 5 21:25:17.445: INFO: Wrong image for pod: daemon-set-qncj9. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 5 21:25:17.445: INFO: Pod daemon-set-qncj9 is not available Feb 5 21:25:18.425: INFO: Pod daemon-set-wj59p is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5730, will wait for the garbage collector to delete the pods Feb 5 21:25:18.511: INFO: Deleting DaemonSet.extensions daemon-set took: 12.504587ms Feb 5 21:25:18.911: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.598919ms Feb 5 21:25:33.219: INFO: Number of nodes with available pods: 0 Feb 5 21:25:33.219: INFO: Number of running nodes: 0, number of available pods: 0 Feb 5 21:25:33.224: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5730/daemonsets","resourceVersion":"6606527"},"items":null} Feb 5 21:25:33.228: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5730/pods","resourceVersion":"6606527"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:25:33.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5730" for this suite. 
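The `Wrong image for pod: daemon-set-qncj9` lines above show the rollback check: after rolling back, the test flags any daemon pod still running the aborted rollout's image (`foo:non-existent`) and waits for it to be replaced. A simplified sketch of that comparison, assuming a plain name-to-image map instead of real pod objects:

```python
def pods_needing_rollback(pod_images, expected_image):
    """Flag daemon pods whose container image does not match the
    rolled-back template image; in the test these are also the pods
    reported as "not available" until the controller replaces them."""
    return [name for name, image in pod_images.items() if image != expected_image]
```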
• [SLOW TEST:45.305 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":63,"skipped":1060,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:25:33.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:26:01.467: INFO: Container started at 2020-02-05 21:25:38 +0000 UTC, pod became ready at 2020-02-05 21:26:00 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:26:01.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4544" for this suite. 
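The readiness-probe test above asserts that the pod did not become ready before the probe's initial delay: the container started at 21:25:38 and the pod became ready at 21:26:00, a 22-second gap (the probe's configured `initialDelaySeconds` is not shown in the log). A sketch of that elapsed-time check using the logged timestamps, with the `+0000 UTC` suffix dropped for simplicity:

```python
from datetime import datetime


def seconds_until_ready(started_at, ready_at):
    """Seconds between container start and pod readiness, as logged by
    the container-probe test (timestamps without the timezone suffix)."""
    fmt = "%Y-%m-%d %H:%M:%S"
    return (datetime.strptime(ready_at, fmt) - datetime.strptime(started_at, fmt)).total_seconds()


# Timestamps from the log line above:
gap = seconds_until_ready("2020-02-05 21:25:38", "2020-02-05 21:26:00")
```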
• [SLOW TEST:28.228 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1080,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:26:01.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Feb 5 21:26:01.599: INFO: Waiting up to 5m0s for pod "downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6" in namespace "downward-api-9775" to be "success or failure" Feb 5 21:26:01.621: INFO: Pod "downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.141368ms Feb 5 21:26:03.628: INFO: Pod "downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.029496164s Feb 5 21:26:05.637: INFO: Pod "downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037786485s Feb 5 21:26:07.646: INFO: Pod "downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047219001s Feb 5 21:26:09.653: INFO: Pod "downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054424919s Feb 5 21:26:11.661: INFO: Pod "downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062529656s STEP: Saw pod success Feb 5 21:26:11.662: INFO: Pod "downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6" satisfied condition "success or failure" Feb 5 21:26:11.667: INFO: Trying to get logs from node jerma-node pod downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6 container dapi-container: STEP: delete the pod Feb 5 21:26:11.773: INFO: Waiting for pod downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6 to disappear Feb 5 21:26:11.778: INFO: Pod downward-api-b7b56f40-ad45-41a0-9c93-e78cf25e97c6 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:26:11.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9775" for this suite. 
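The `dapi-container` above reads the node's IP through a downward-API env var backed by a `fieldRef` on `status.hostIP`. A toy sketch of how such field paths resolve against pod status, assuming a plain nested dict in place of a real pod object (the helper name and sample IP are illustrative):

```python
def resolve_downward_env(env_specs, pod):
    """Resolve downward-API style fieldRef paths (e.g. "status.hostIP")
    into environment variable values, the way the dapi-container above
    consumes the host IP."""
    resolved = {}
    for name, field_path in env_specs.items():
        section, _, field = field_path.partition(".")
        resolved[name] = pod[section][field]
    return resolved
```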
• [SLOW TEST:10.318 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1083,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:26:11.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 5 21:26:11.994: INFO: Waiting up to 5m0s for pod "pod-5885e98c-6346-4936-a807-4bb2993e5ef3" in namespace "emptydir-3300" to be "success or failure" Feb 5 21:26:12.003: INFO: Pod "pod-5885e98c-6346-4936-a807-4bb2993e5ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040623ms Feb 5 21:26:14.010: INFO: Pod "pod-5885e98c-6346-4936-a807-4bb2993e5ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015282157s Feb 5 21:26:16.016: INFO: Pod "pod-5885e98c-6346-4936-a807-4bb2993e5ef3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.021223915s Feb 5 21:26:18.027: INFO: Pod "pod-5885e98c-6346-4936-a807-4bb2993e5ef3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032921694s Feb 5 21:26:20.034: INFO: Pod "pod-5885e98c-6346-4936-a807-4bb2993e5ef3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039685448s STEP: Saw pod success Feb 5 21:26:20.034: INFO: Pod "pod-5885e98c-6346-4936-a807-4bb2993e5ef3" satisfied condition "success or failure" Feb 5 21:26:20.037: INFO: Trying to get logs from node jerma-node pod pod-5885e98c-6346-4936-a807-4bb2993e5ef3 container test-container: STEP: delete the pod Feb 5 21:26:20.120: INFO: Waiting for pod pod-5885e98c-6346-4936-a807-4bb2993e5ef3 to disappear Feb 5 21:26:20.125: INFO: Pod pod-5885e98c-6346-4936-a807-4bb2993e5ef3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:26:20.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3300" for this suite. 
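The emptyDir test above verifies that a file written as a non-root user with mode 0644 on the default medium actually carries that mode. The real check runs inside the pod against the emptyDir mount; a local sketch of the same shape, creating a file and reading back its permission bits:

```python
import os
import stat


def create_with_mode(path, content, mode=0o644):
    """Create a file, force the requested mode (chmod avoids umask
    interference), and return the permission bits actually observed."""
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, mode)
    try:
        os.write(fd, content)
    finally:
        os.close(fd)
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)
```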
• [SLOW TEST:8.320 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1085,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:26:20.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-eff210b7-6788-4627-a581-4424b5e4055c STEP: Creating a pod to test consume secrets Feb 5 21:26:20.297: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d1f850ee-c504-49ef-8cc3-77fbf60a2219" in namespace "projected-7417" to be "success or failure" Feb 5 21:26:20.302: INFO: Pod "pod-projected-secrets-d1f850ee-c504-49ef-8cc3-77fbf60a2219": Phase="Pending", Reason="", readiness=false. Elapsed: 4.919936ms Feb 5 21:26:22.306: INFO: Pod "pod-projected-secrets-d1f850ee-c504-49ef-8cc3-77fbf60a2219": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008897996s Feb 5 21:26:24.317: INFO: Pod "pod-projected-secrets-d1f850ee-c504-49ef-8cc3-77fbf60a2219": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020262872s Feb 5 21:26:26.330: INFO: Pod "pod-projected-secrets-d1f850ee-c504-49ef-8cc3-77fbf60a2219": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032799816s Feb 5 21:26:28.343: INFO: Pod "pod-projected-secrets-d1f850ee-c504-49ef-8cc3-77fbf60a2219": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045851243s STEP: Saw pod success Feb 5 21:26:28.343: INFO: Pod "pod-projected-secrets-d1f850ee-c504-49ef-8cc3-77fbf60a2219" satisfied condition "success or failure" Feb 5 21:26:28.347: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-d1f850ee-c504-49ef-8cc3-77fbf60a2219 container projected-secret-volume-test: STEP: delete the pod Feb 5 21:26:28.413: INFO: Waiting for pod pod-projected-secrets-d1f850ee-c504-49ef-8cc3-77fbf60a2219 to disappear Feb 5 21:26:28.421: INFO: Pod pod-projected-secrets-d1f850ee-c504-49ef-8cc3-77fbf60a2219 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:26:28.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7417" for this suite. 
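Unlike the earlier env-var Secret test, the projected-secret test above consumes the Secret as a volume: each key becomes a file whose content is the decoded value. A minimal sketch of that projection into a directory (helper name and sample key are illustrative):

```python
import base64
import os


def project_secret(secret_data, mount_dir):
    """Materialize a Secret's base64-encoded `data` map as one file per
    key, the way the projected volume presents it to the
    projected-secret-volume-test container above."""
    for key, b64_value in secret_data.items():
        with open(os.path.join(mount_dir, key), "wb") as f:
            f.write(base64.b64decode(b64_value))
```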
• [SLOW TEST:8.345 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1120,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:26:28.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 
'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:27:16.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8743" for this suite. • [SLOW TEST:47.628 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1121,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 
21:27:16.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Feb 5 21:27:16.166: INFO: Created pod &Pod{ObjectMeta:{dns-7020 dns-7020 /api/v1/namespaces/dns-7020/pods/dns-7020 3484b6e7-0321-4fde-a612-50c88bcfaaa5 6606944 0 2020-02-05 21:27:16 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4zdln,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4zdln,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4zdln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,
ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
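The Go struct dump above corresponds to a manifest along these lines — a minimal sketch reconstructed from the logged spec (defaulted fields and the injected service-account volume omitted):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-7020
  namespace: dns-7020
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
  dnsPolicy: None        # bypass cluster DNS; use only dnsConfig below
  dnsConfig:
    nameservers:
    - 1.1.1.1            # the customized DNS server the test verifies
    searches:
    - resolv.conf.local  # the customized DNS suffix the test verifies
```

With dnsPolicy=None, the pod's /etc/resolv.conf is generated solely from dnsConfig, which is what the two ExecWithOptions checks below (dns-suffix and dns-server-list) confirm from inside the container.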
Feb 5 21:27:22.179: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7020 PodName:dns-7020 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:27:22.179: INFO: >>> kubeConfig: /root/.kube/config I0205 21:27:22.219879 9 log.go:172] (0xc001ad6420) (0xc0019bae60) Create stream I0205 21:27:22.219975 9 log.go:172] (0xc001ad6420) (0xc0019bae60) Stream added, broadcasting: 1 I0205 21:27:22.223752 9 log.go:172] (0xc001ad6420) Reply frame received for 1 I0205 21:27:22.223846 9 log.go:172] (0xc001ad6420) (0xc001754000) Create stream I0205 21:27:22.223856 9 log.go:172] (0xc001ad6420) (0xc001754000) Stream added, broadcasting: 3 I0205 21:27:22.225001 9 log.go:172] (0xc001ad6420) Reply frame received for 3 I0205 21:27:22.225050 9 log.go:172] (0xc001ad6420) (0xc0019baf00) Create stream I0205 21:27:22.225062 9 log.go:172] (0xc001ad6420) (0xc0019baf00) Stream added, broadcasting: 5 I0205 21:27:22.228054 9 log.go:172] (0xc001ad6420) Reply frame received for 5 I0205 21:27:22.332680 9 log.go:172] (0xc001ad6420) Data frame received for 3 I0205 21:27:22.332759 9 log.go:172] (0xc001754000) (3) Data frame handling I0205 21:27:22.332790 9 log.go:172] (0xc001754000) (3) Data frame sent I0205 21:27:22.437816 9 log.go:172] (0xc001ad6420) Data frame received for 1 I0205 21:27:22.438043 9 log.go:172] (0xc001ad6420) (0xc0019baf00) Stream removed, broadcasting: 5 I0205 21:27:22.438097 9 log.go:172] (0xc0019bae60) (1) Data frame handling I0205 21:27:22.438136 9 log.go:172] (0xc0019bae60) (1) Data frame sent I0205 21:27:22.438214 9 log.go:172] (0xc001ad6420) (0xc001754000) Stream removed, broadcasting: 3 I0205 21:27:22.438674 9 log.go:172] (0xc001ad6420) (0xc0019bae60) Stream removed, broadcasting: 1 I0205 21:27:22.438848 9 log.go:172] (0xc001ad6420) Go away received I0205 21:27:22.439092 9 log.go:172] (0xc001ad6420) (0xc0019bae60) Stream removed, broadcasting: 1 I0205 21:27:22.439118 9 log.go:172] (0xc001ad6420) (0xc001754000) 
Stream removed, broadcasting: 3 I0205 21:27:22.439130 9 log.go:172] (0xc001ad6420) (0xc0019baf00) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Feb 5 21:27:22.439: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7020 PodName:dns-7020 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:27:22.439: INFO: >>> kubeConfig: /root/.kube/config I0205 21:27:22.480178 9 log.go:172] (0xc002182160) (0xc001dfce60) Create stream I0205 21:27:22.480375 9 log.go:172] (0xc002182160) (0xc001dfce60) Stream added, broadcasting: 1 I0205 21:27:22.485492 9 log.go:172] (0xc002182160) Reply frame received for 1 I0205 21:27:22.485644 9 log.go:172] (0xc002182160) (0xc001dfcf00) Create stream I0205 21:27:22.485687 9 log.go:172] (0xc002182160) (0xc001dfcf00) Stream added, broadcasting: 3 I0205 21:27:22.491006 9 log.go:172] (0xc002182160) Reply frame received for 3 I0205 21:27:22.491047 9 log.go:172] (0xc002182160) (0xc001ee0320) Create stream I0205 21:27:22.491054 9 log.go:172] (0xc002182160) (0xc001ee0320) Stream added, broadcasting: 5 I0205 21:27:22.493521 9 log.go:172] (0xc002182160) Reply frame received for 5 I0205 21:27:22.591775 9 log.go:172] (0xc002182160) Data frame received for 3 I0205 21:27:22.591980 9 log.go:172] (0xc001dfcf00) (3) Data frame handling I0205 21:27:22.592005 9 log.go:172] (0xc001dfcf00) (3) Data frame sent I0205 21:27:22.739803 9 log.go:172] (0xc002182160) Data frame received for 1 I0205 21:27:22.740148 9 log.go:172] (0xc001dfce60) (1) Data frame handling I0205 21:27:22.740182 9 log.go:172] (0xc001dfce60) (1) Data frame sent I0205 21:27:22.740332 9 log.go:172] (0xc002182160) (0xc001dfce60) Stream removed, broadcasting: 1 I0205 21:27:22.741337 9 log.go:172] (0xc002182160) (0xc001dfcf00) Stream removed, broadcasting: 3 I0205 21:27:22.741640 9 log.go:172] (0xc002182160) (0xc001ee0320) Stream removed, broadcasting: 5 I0205 21:27:22.741711 9 log.go:172] 
(0xc002182160) (0xc001dfce60) Stream removed, broadcasting: 1 I0205 21:27:22.741729 9 log.go:172] (0xc002182160) (0xc001dfcf00) Stream removed, broadcasting: 3 I0205 21:27:22.741739 9 log.go:172] (0xc002182160) (0xc001ee0320) Stream removed, broadcasting: 5 Feb 5 21:27:22.742: INFO: Deleting pod dns-7020... I0205 21:27:22.742497 9 log.go:172] (0xc002182160) Go away received [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:27:22.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7020" for this suite. • [SLOW TEST:6.705 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":69,"skipped":1134,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:27:22.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name 
cm-test-opt-del-a64af45c-d5fa-4ae8-bcbe-185b8a70fa1d STEP: Creating configMap with name cm-test-opt-upd-bd3120b3-c1b8-41d2-9707-d1f9106a9304 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a64af45c-d5fa-4ae8-bcbe-185b8a70fa1d STEP: Updating configmap cm-test-opt-upd-bd3120b3-c1b8-41d2-9707-d1f9106a9304 STEP: Creating configMap with name cm-test-opt-create-d44d49cf-f089-4f18-8fbf-39e2bd3063ac STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:27:39.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3545" for this suite. • [SLOW TEST:16.415 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1139,"failed":0} [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:27:39.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Feb 5 21:27:39.318: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:27:55.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9003" for this suite. • [SLOW TEST:16.022 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":71,"skipped":1139,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:27:55.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 5 21:27:55.434: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-418 /api/v1/namespaces/watch-418/configmaps/e2e-watch-test-resource-version 0ae15e2f-9275-45b2-9b1f-611338d6364f 6607142 0 2020-02-05 21:27:55 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 5 21:27:55.434: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-418 /api/v1/namespaces/watch-418/configmaps/e2e-watch-test-resource-version 0ae15e2f-9275-45b2-9b1f-611338d6364f 6607143 0 2020-02-05 21:27:55 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:27:55.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-418" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":72,"skipped":1158,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:27:55.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 5 21:28:11.732: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 5 21:28:11.782: INFO: Pod pod-with-poststart-http-hook still exists Feb 5 21:28:13.782: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 5 21:28:13.791: INFO: Pod pod-with-poststart-http-hook still exists Feb 5 21:28:15.782: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 5 21:28:15.796: INFO: Pod pod-with-poststart-http-hook still exists Feb 5 21:28:17.782: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 5 21:28:17.795: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:28:17.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4172" for this suite. 
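For reference, a pod with a postStart HTTP hook of the kind exercised by this test looks roughly like the sketch below. The hook handler's path, port, and host are illustrative assumptions — the log only records the pod name and that a separate handler container serves the HTTPGet request:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8  # image is an assumption
    lifecycle:
      postStart:
        httpGet:
          path: /echo      # illustrative path, not taken from the log
          port: 8080       # illustrative port
          host: 10.32.0.5  # illustrative address of the handler pod
```

The kubelet issues the httpGet immediately after the container starts; the test then confirms the handler received the request before deleting the pod, as the "Waiting for pod ... to disappear" lines show.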
• [SLOW TEST:22.351 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1178,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:28:17.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac Feb 5 21:28:17.982: INFO: Pod name my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac: Found 0 pods out of 1 Feb 5 21:28:23.006: INFO: Pod name my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac: Found 1 pods out of 1 Feb 5 21:28:23.006: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac" are running 
Feb 5 21:28:25.027: INFO: Pod "my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac-mrl8v" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 21:28:18 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 21:28:18 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 21:28:18 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 21:28:17 +0000 UTC Reason: Message:}]) Feb 5 21:28:25.027: INFO: Trying to dial the pod Feb 5 21:28:30.043: INFO: Controller my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac: Got expected result from replica 1 [my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac-mrl8v]: "my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac-mrl8v", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:28:30.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6383" for this suite. 
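The ReplicationController under test can be sketched as follows, reconstructed from the logged names and replica count. The image and serve-hostname argument are assumptions based on the test's intent (each replica replies with its own pod hostname, matching the "Got expected result from replica 1" line):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac
spec:
  replicas: 1   # the log waits for 1 of 1 pods
  selector:
    name: my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac
  template:
    metadata:
      labels:
        name: my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac
    spec:
      containers:
      - name: my-hostname-basic-c139a114-7bf9-4b77-b388-cac1a976baac
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8  # image is an assumption
        args: ["serve-hostname"]  # assumption: serves the pod's hostname over HTTP
```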
• [SLOW TEST:12.227 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":74,"skipped":1192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:28:30.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3465 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3465 I0205 21:28:30.385001 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3465, replica count: 2 I0205 21:28:33.435910 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 
0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 21:28:36.436595 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 21:28:39.437225 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 21:28:42.437728 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 5 21:28:42.437: INFO: Creating new exec pod Feb 5 21:28:51.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3465 execpod8tjgt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 5 21:28:51.900: INFO: stderr: "I0205 21:28:51.653507 1077 log.go:172] (0xc00074abb0) (0xc0007bda40) Create stream\nI0205 21:28:51.653666 1077 log.go:172] (0xc00074abb0) (0xc0007bda40) Stream added, broadcasting: 1\nI0205 21:28:51.657455 1077 log.go:172] (0xc00074abb0) Reply frame received for 1\nI0205 21:28:51.657513 1077 log.go:172] (0xc00074abb0) (0xc000704280) Create stream\nI0205 21:28:51.657534 1077 log.go:172] (0xc00074abb0) (0xc000704280) Stream added, broadcasting: 3\nI0205 21:28:51.664499 1077 log.go:172] (0xc00074abb0) Reply frame received for 3\nI0205 21:28:51.664649 1077 log.go:172] (0xc00074abb0) (0xc000884000) Create stream\nI0205 21:28:51.664665 1077 log.go:172] (0xc00074abb0) (0xc000884000) Stream added, broadcasting: 5\nI0205 21:28:51.668329 1077 log.go:172] (0xc00074abb0) Reply frame received for 5\nI0205 21:28:51.761265 1077 log.go:172] (0xc00074abb0) Data frame received for 5\nI0205 21:28:51.761316 1077 log.go:172] (0xc000884000) (5) Data frame handling\nI0205 21:28:51.761348 1077 log.go:172] (0xc000884000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0205 21:28:51.772303 1077 log.go:172] (0xc00074abb0) 
Data frame received for 5\nI0205 21:28:51.772375 1077 log.go:172] (0xc000884000) (5) Data frame handling\nI0205 21:28:51.772426 1077 log.go:172] (0xc000884000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0205 21:28:51.890041 1077 log.go:172] (0xc00074abb0) (0xc000704280) Stream removed, broadcasting: 3\nI0205 21:28:51.890418 1077 log.go:172] (0xc00074abb0) Data frame received for 1\nI0205 21:28:51.890443 1077 log.go:172] (0xc0007bda40) (1) Data frame handling\nI0205 21:28:51.890458 1077 log.go:172] (0xc00074abb0) (0xc000884000) Stream removed, broadcasting: 5\nI0205 21:28:51.890489 1077 log.go:172] (0xc0007bda40) (1) Data frame sent\nI0205 21:28:51.890506 1077 log.go:172] (0xc00074abb0) (0xc0007bda40) Stream removed, broadcasting: 1\nI0205 21:28:51.890518 1077 log.go:172] (0xc00074abb0) Go away received\nI0205 21:28:51.891777 1077 log.go:172] (0xc00074abb0) (0xc0007bda40) Stream removed, broadcasting: 1\nI0205 21:28:51.891885 1077 log.go:172] (0xc00074abb0) (0xc000704280) Stream removed, broadcasting: 3\nI0205 21:28:51.891899 1077 log.go:172] (0xc00074abb0) (0xc000884000) Stream removed, broadcasting: 5\n" Feb 5 21:28:51.900: INFO: stdout: "" Feb 5 21:28:51.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3465 execpod8tjgt -- /bin/sh -x -c nc -zv -t -w 2 10.96.220.103 80' Feb 5 21:28:52.334: INFO: stderr: "I0205 21:28:52.130771 1094 log.go:172] (0xc0003c00b0) (0xc0007034a0) Create stream\nI0205 21:28:52.131147 1094 log.go:172] (0xc0003c00b0) (0xc0007034a0) Stream added, broadcasting: 1\nI0205 21:28:52.135641 1094 log.go:172] (0xc0003c00b0) Reply frame received for 1\nI0205 21:28:52.135711 1094 log.go:172] (0xc0003c00b0) (0xc0006aba40) Create stream\nI0205 21:28:52.135730 1094 log.go:172] (0xc0003c00b0) (0xc0006aba40) Stream added, broadcasting: 3\nI0205 21:28:52.137255 1094 log.go:172] (0xc0003c00b0) Reply frame received for 3\nI0205 21:28:52.137292 1094 log.go:172] 
(0xc0003c00b0) (0xc0008ea000) Create stream\nI0205 21:28:52.137305 1094 log.go:172] (0xc0003c00b0) (0xc0008ea000) Stream added, broadcasting: 5\nI0205 21:28:52.139187 1094 log.go:172] (0xc0003c00b0) Reply frame received for 5\nI0205 21:28:52.235978 1094 log.go:172] (0xc0003c00b0) Data frame received for 5\nI0205 21:28:52.236058 1094 log.go:172] (0xc0008ea000) (5) Data frame handling\nI0205 21:28:52.236093 1094 log.go:172] (0xc0008ea000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.220.103 80\nI0205 21:28:52.244509 1094 log.go:172] (0xc0003c00b0) Data frame received for 5\nI0205 21:28:52.244544 1094 log.go:172] (0xc0008ea000) (5) Data frame handling\nI0205 21:28:52.244569 1094 log.go:172] (0xc0008ea000) (5) Data frame sent\nConnection to 10.96.220.103 80 port [tcp/http] succeeded!\nI0205 21:28:52.326050 1094 log.go:172] (0xc0003c00b0) (0xc0008ea000) Stream removed, broadcasting: 5\nI0205 21:28:52.326189 1094 log.go:172] (0xc0003c00b0) Data frame received for 1\nI0205 21:28:52.326217 1094 log.go:172] (0xc0003c00b0) (0xc0006aba40) Stream removed, broadcasting: 3\nI0205 21:28:52.326250 1094 log.go:172] (0xc0007034a0) (1) Data frame handling\nI0205 21:28:52.326274 1094 log.go:172] (0xc0007034a0) (1) Data frame sent\nI0205 21:28:52.326280 1094 log.go:172] (0xc0003c00b0) (0xc0007034a0) Stream removed, broadcasting: 1\nI0205 21:28:52.326298 1094 log.go:172] (0xc0003c00b0) Go away received\nI0205 21:28:52.327245 1094 log.go:172] (0xc0003c00b0) (0xc0007034a0) Stream removed, broadcasting: 1\nI0205 21:28:52.327328 1094 log.go:172] (0xc0003c00b0) (0xc0006aba40) Stream removed, broadcasting: 3\nI0205 21:28:52.327365 1094 log.go:172] (0xc0003c00b0) (0xc0008ea000) Stream removed, broadcasting: 5\n" Feb 5 21:28:52.334: INFO: stdout: "" Feb 5 21:28:52.334: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:28:52.442: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3465" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:22.405 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":75,"skipped":1234,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:28:52.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3212 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-3212 Feb 5 21:28:52.718: INFO: 
Found 0 stateful pods, waiting for 1 Feb 5 21:29:02.725: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Feb 5 21:29:02.786: INFO: Deleting all statefulset in ns statefulset-3212 Feb 5 21:29:02.955: INFO: Scaling statefulset ss to 0 Feb 5 21:29:14.728: INFO: Waiting for statefulset status.replicas updated to 0 Feb 5 21:29:14.732: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:29:14.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3212" for this suite. • [SLOW TEST:22.325 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":76,"skipped":1243,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:29:14.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Feb 5 21:29:14.927: INFO: Waiting up to 5m0s for pod "var-expansion-53d50def-1594-4dce-b2b4-77dee3cc35d9" in namespace "var-expansion-5292" to be "success or failure" Feb 5 21:29:14.950: INFO: Pod "var-expansion-53d50def-1594-4dce-b2b4-77dee3cc35d9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.180589ms Feb 5 21:29:16.960: INFO: Pod "var-expansion-53d50def-1594-4dce-b2b4-77dee3cc35d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033010001s Feb 5 21:29:18.966: INFO: Pod "var-expansion-53d50def-1594-4dce-b2b4-77dee3cc35d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039547849s Feb 5 21:29:20.974: INFO: Pod "var-expansion-53d50def-1594-4dce-b2b4-77dee3cc35d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047535322s Feb 5 21:29:22.981: INFO: Pod "var-expansion-53d50def-1594-4dce-b2b4-77dee3cc35d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.05373426s STEP: Saw pod success Feb 5 21:29:22.981: INFO: Pod "var-expansion-53d50def-1594-4dce-b2b4-77dee3cc35d9" satisfied condition "success or failure" Feb 5 21:29:22.985: INFO: Trying to get logs from node jerma-node pod var-expansion-53d50def-1594-4dce-b2b4-77dee3cc35d9 container dapi-container: STEP: delete the pod Feb 5 21:29:23.222: INFO: Waiting for pod var-expansion-53d50def-1594-4dce-b2b4-77dee3cc35d9 to disappear Feb 5 21:29:23.240: INFO: Pod var-expansion-53d50def-1594-4dce-b2b4-77dee3cc35d9 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:29:23.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5292" for this suite. • [SLOW TEST:8.531 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1244,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:29:23.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting 
for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:29:23.484: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 5 21:29:26.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8940 create -f -' Feb 5 21:29:29.331: INFO: stderr: "" Feb 5 21:29:29.332: INFO: stdout: "e2e-test-crd-publish-openapi-337-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 5 21:29:29.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8940 delete e2e-test-crd-publish-openapi-337-crds test-cr' Feb 5 21:29:29.499: INFO: stderr: "" Feb 5 21:29:29.499: INFO: stdout: "e2e-test-crd-publish-openapi-337-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Feb 5 21:29:29.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8940 apply -f -' Feb 5 21:29:29.780: INFO: stderr: "" Feb 5 21:29:29.780: INFO: stdout: "e2e-test-crd-publish-openapi-337-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 5 21:29:29.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8940 delete e2e-test-crd-publish-openapi-337-crds test-cr' Feb 5 21:29:29.916: INFO: stderr: "" Feb 5 21:29:29.916: INFO: stdout: "e2e-test-crd-publish-openapi-337-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Feb 5 21:29:29.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-337-crds' Feb 5 21:29:30.226: INFO: stderr: "" Feb 5 21:29:30.226: INFO: stdout: "KIND: 
E2e-test-crd-publish-openapi-337-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:29:33.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8940" for this suite. • [SLOW TEST:9.829 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":78,"skipped":1253,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:29:33.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-774 STEP: creating a selector STEP: 
Creating the service pods in kubernetes Feb 5 21:29:33.229: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 5 21:30:07.489: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-774 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:30:07.490: INFO: >>> kubeConfig: /root/.kube/config I0205 21:30:07.557839 9 log.go:172] (0xc002182370) (0xc00235ebe0) Create stream I0205 21:30:07.558008 9 log.go:172] (0xc002182370) (0xc00235ebe0) Stream added, broadcasting: 1 I0205 21:30:07.562772 9 log.go:172] (0xc002182370) Reply frame received for 1 I0205 21:30:07.562936 9 log.go:172] (0xc002182370) (0xc0019ba000) Create stream I0205 21:30:07.562973 9 log.go:172] (0xc002182370) (0xc0019ba000) Stream added, broadcasting: 3 I0205 21:30:07.564855 9 log.go:172] (0xc002182370) Reply frame received for 3 I0205 21:30:07.564894 9 log.go:172] (0xc002182370) (0xc001dfd400) Create stream I0205 21:30:07.564904 9 log.go:172] (0xc002182370) (0xc001dfd400) Stream added, broadcasting: 5 I0205 21:30:07.566528 9 log.go:172] (0xc002182370) Reply frame received for 5 I0205 21:30:07.745140 9 log.go:172] (0xc002182370) Data frame received for 3 I0205 21:30:07.745345 9 log.go:172] (0xc0019ba000) (3) Data frame handling I0205 21:30:07.745403 9 log.go:172] (0xc0019ba000) (3) Data frame sent I0205 21:30:07.831169 9 log.go:172] (0xc002182370) (0xc001dfd400) Stream removed, broadcasting: 5 I0205 21:30:07.831275 9 log.go:172] (0xc002182370) (0xc0019ba000) Stream removed, broadcasting: 3 I0205 21:30:07.831363 9 log.go:172] (0xc002182370) Data frame received for 1 I0205 21:30:07.831382 9 log.go:172] (0xc00235ebe0) (1) Data frame handling I0205 21:30:07.831502 9 log.go:172] (0xc00235ebe0) (1) Data frame sent I0205 21:30:07.831521 9 log.go:172] (0xc002182370) 
(0xc00235ebe0) Stream removed, broadcasting: 1 I0205 21:30:07.831543 9 log.go:172] (0xc002182370) Go away received I0205 21:30:07.831649 9 log.go:172] (0xc002182370) (0xc00235ebe0) Stream removed, broadcasting: 1 I0205 21:30:07.831662 9 log.go:172] (0xc002182370) (0xc0019ba000) Stream removed, broadcasting: 3 I0205 21:30:07.831669 9 log.go:172] (0xc002182370) (0xc001dfd400) Stream removed, broadcasting: 5 Feb 5 21:30:07.831: INFO: Waiting for responses: map[] Feb 5 21:30:07.839: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-774 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:30:07.839: INFO: >>> kubeConfig: /root/.kube/config I0205 21:30:07.900201 9 log.go:172] (0xc001d3e580) (0xc0019ba6e0) Create stream I0205 21:30:07.900449 9 log.go:172] (0xc001d3e580) (0xc0019ba6e0) Stream added, broadcasting: 1 I0205 21:30:07.904756 9 log.go:172] (0xc001d3e580) Reply frame received for 1 I0205 21:30:07.904826 9 log.go:172] (0xc001d3e580) (0xc001d88fa0) Create stream I0205 21:30:07.904861 9 log.go:172] (0xc001d3e580) (0xc001d88fa0) Stream added, broadcasting: 3 I0205 21:30:07.906119 9 log.go:172] (0xc001d3e580) Reply frame received for 3 I0205 21:30:07.906145 9 log.go:172] (0xc001d3e580) (0xc00235ed20) Create stream I0205 21:30:07.906155 9 log.go:172] (0xc001d3e580) (0xc00235ed20) Stream added, broadcasting: 5 I0205 21:30:07.909805 9 log.go:172] (0xc001d3e580) Reply frame received for 5 I0205 21:30:07.994154 9 log.go:172] (0xc001d3e580) Data frame received for 3 I0205 21:30:07.994232 9 log.go:172] (0xc001d88fa0) (3) Data frame handling I0205 21:30:07.994246 9 log.go:172] (0xc001d88fa0) (3) Data frame sent I0205 21:30:08.051594 9 log.go:172] (0xc001d3e580) (0xc00235ed20) Stream removed, broadcasting: 5 I0205 21:30:08.051703 9 log.go:172] (0xc001d3e580) Data frame 
received for 1 I0205 21:30:08.051744 9 log.go:172] (0xc001d3e580) (0xc001d88fa0) Stream removed, broadcasting: 3 I0205 21:30:08.051788 9 log.go:172] (0xc0019ba6e0) (1) Data frame handling I0205 21:30:08.051823 9 log.go:172] (0xc0019ba6e0) (1) Data frame sent I0205 21:30:08.051841 9 log.go:172] (0xc001d3e580) (0xc0019ba6e0) Stream removed, broadcasting: 1 I0205 21:30:08.051858 9 log.go:172] (0xc001d3e580) Go away received I0205 21:30:08.052365 9 log.go:172] (0xc001d3e580) (0xc0019ba6e0) Stream removed, broadcasting: 1 I0205 21:30:08.052383 9 log.go:172] (0xc001d3e580) (0xc001d88fa0) Stream removed, broadcasting: 3 I0205 21:30:08.052389 9 log.go:172] (0xc001d3e580) (0xc00235ed20) Stream removed, broadcasting: 5 Feb 5 21:30:08.052: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:30:08.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-774" for this suite. 
• [SLOW TEST:34.929 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1264,"failed":0} SSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:30:08.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 5 21:30:22.781: INFO: Successfully updated pod "pod-update-13c9461e-00e9-4b79-b1d8-c4da9f9d623a" STEP: verifying the updated pod is in kubernetes Feb 5 21:30:22.795: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:30:22.795: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2327" for this suite. • [SLOW TEST:14.743 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1267,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:30:22.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Feb 5 21:30:22.921: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Feb 5 21:30:33.488: INFO: >>> kubeConfig: /root/.kube/config Feb 5 21:30:36.431: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Feb 5 21:30:47.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8290" for this suite. • [SLOW TEST:24.297 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":81,"skipped":1268,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:30:47.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 5 21:30:47.184: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4eb098b-dc4b-4c54-a83f-73377ce84660" in namespace "downward-api-3330" to be "success or failure" Feb 5 21:30:47.194: INFO: Pod 
"downwardapi-volume-c4eb098b-dc4b-4c54-a83f-73377ce84660": Phase="Pending", Reason="", readiness=false. Elapsed: 9.70527ms Feb 5 21:30:49.246: INFO: Pod "downwardapi-volume-c4eb098b-dc4b-4c54-a83f-73377ce84660": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061662934s Feb 5 21:30:51.254: INFO: Pod "downwardapi-volume-c4eb098b-dc4b-4c54-a83f-73377ce84660": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069931541s Feb 5 21:30:53.264: INFO: Pod "downwardapi-volume-c4eb098b-dc4b-4c54-a83f-73377ce84660": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079819405s Feb 5 21:30:55.368: INFO: Pod "downwardapi-volume-c4eb098b-dc4b-4c54-a83f-73377ce84660": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.184106875s STEP: Saw pod success Feb 5 21:30:55.368: INFO: Pod "downwardapi-volume-c4eb098b-dc4b-4c54-a83f-73377ce84660" satisfied condition "success or failure" Feb 5 21:30:55.373: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c4eb098b-dc4b-4c54-a83f-73377ce84660 container client-container: STEP: delete the pod Feb 5 21:30:55.437: INFO: Waiting for pod downwardapi-volume-c4eb098b-dc4b-4c54-a83f-73377ce84660 to disappear Feb 5 21:30:55.587: INFO: Pod downwardapi-volume-c4eb098b-dc4b-4c54-a83f-73377ce84660 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:30:55.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3330" for this suite. 
• [SLOW TEST:8.486 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1283,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:30:55.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8418 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Feb 5 21:30:55.770: INFO: Found 0 stateful pods, waiting for 3 Feb 5 21:31:05.776: INFO: Found 2 stateful pods, waiting for 3 Feb 5 21:31:15.782: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently 
Running - Ready=true Feb 5 21:31:15.782: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 5 21:31:15.782: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 5 21:31:25.783: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Feb 5 21:31:25.783: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Feb 5 21:31:25.783: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Feb 5 21:31:25.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8418 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 5 21:31:26.197: INFO: stderr: "I0205 21:31:25.999337 1216 log.go:172] (0xc0000ece70) (0xc0006341e0) Create stream\nI0205 21:31:25.999606 1216 log.go:172] (0xc0000ece70) (0xc0006341e0) Stream added, broadcasting: 1\nI0205 21:31:26.004524 1216 log.go:172] (0xc0000ece70) Reply frame received for 1\nI0205 21:31:26.004603 1216 log.go:172] (0xc0000ece70) (0xc000634280) Create stream\nI0205 21:31:26.004617 1216 log.go:172] (0xc0000ece70) (0xc000634280) Stream added, broadcasting: 3\nI0205 21:31:26.005916 1216 log.go:172] (0xc0000ece70) Reply frame received for 3\nI0205 21:31:26.005962 1216 log.go:172] (0xc0000ece70) (0xc00049f5e0) Create stream\nI0205 21:31:26.005985 1216 log.go:172] (0xc0000ece70) (0xc00049f5e0) Stream added, broadcasting: 5\nI0205 21:31:26.008806 1216 log.go:172] (0xc0000ece70) Reply frame received for 5\nI0205 21:31:26.071235 1216 log.go:172] (0xc0000ece70) Data frame received for 5\nI0205 21:31:26.071284 1216 log.go:172] (0xc00049f5e0) (5) Data frame handling\nI0205 21:31:26.071308 1216 log.go:172] (0xc00049f5e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0205 21:31:26.093829 1216 log.go:172] (0xc0000ece70) Data frame received for 3\nI0205 21:31:26.093846 
1216 log.go:172] (0xc000634280) (3) Data frame handling\nI0205 21:31:26.093858 1216 log.go:172] (0xc000634280) (3) Data frame sent\nI0205 21:31:26.188181 1216 log.go:172] (0xc0000ece70) Data frame received for 1\nI0205 21:31:26.188283 1216 log.go:172] (0xc0006341e0) (1) Data frame handling\nI0205 21:31:26.188310 1216 log.go:172] (0xc0006341e0) (1) Data frame sent\nI0205 21:31:26.189135 1216 log.go:172] (0xc0000ece70) (0xc0006341e0) Stream removed, broadcasting: 1\nI0205 21:31:26.189982 1216 log.go:172] (0xc0000ece70) (0xc000634280) Stream removed, broadcasting: 3\nI0205 21:31:26.190313 1216 log.go:172] (0xc0000ece70) (0xc00049f5e0) Stream removed, broadcasting: 5\nI0205 21:31:26.190351 1216 log.go:172] (0xc0000ece70) (0xc0006341e0) Stream removed, broadcasting: 1\nI0205 21:31:26.190373 1216 log.go:172] (0xc0000ece70) (0xc000634280) Stream removed, broadcasting: 3\nI0205 21:31:26.190384 1216 log.go:172] (0xc0000ece70) (0xc00049f5e0) Stream removed, broadcasting: 5\n" Feb 5 21:31:26.198: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 5 21:31:26.198: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Feb 5 21:31:36.243: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Feb 5 21:31:46.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8418 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:31:46.691: INFO: stderr: "I0205 21:31:46.468811 1236 log.go:172] (0xc000c08e70) (0xc000705f40) Create stream\nI0205 21:31:46.469168 1236 log.go:172] (0xc000c08e70) (0xc000705f40) Stream added, broadcasting: 1\nI0205 21:31:46.473654 1236 log.go:172] (0xc000c08e70) Reply frame 
received for 1\nI0205 21:31:46.473707 1236 log.go:172] (0xc000c08e70) (0xc000b66320) Create stream\nI0205 21:31:46.473726 1236 log.go:172] (0xc000c08e70) (0xc000b66320) Stream added, broadcasting: 3\nI0205 21:31:46.475735 1236 log.go:172] (0xc000c08e70) Reply frame received for 3\nI0205 21:31:46.475768 1236 log.go:172] (0xc000c08e70) (0xc000ae60a0) Create stream\nI0205 21:31:46.475775 1236 log.go:172] (0xc000c08e70) (0xc000ae60a0) Stream added, broadcasting: 5\nI0205 21:31:46.478632 1236 log.go:172] (0xc000c08e70) Reply frame received for 5\nI0205 21:31:46.563638 1236 log.go:172] (0xc000c08e70) Data frame received for 3\nI0205 21:31:46.564332 1236 log.go:172] (0xc000c08e70) Data frame received for 5\nI0205 21:31:46.564557 1236 log.go:172] (0xc000ae60a0) (5) Data frame handling\nI0205 21:31:46.564629 1236 log.go:172] (0xc000ae60a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0205 21:31:46.564689 1236 log.go:172] (0xc000b66320) (3) Data frame handling\nI0205 21:31:46.564702 1236 log.go:172] (0xc000b66320) (3) Data frame sent\nI0205 21:31:46.681108 1236 log.go:172] (0xc000c08e70) (0xc000b66320) Stream removed, broadcasting: 3\nI0205 21:31:46.681366 1236 log.go:172] (0xc000c08e70) Data frame received for 1\nI0205 21:31:46.681487 1236 log.go:172] (0xc000c08e70) (0xc000ae60a0) Stream removed, broadcasting: 5\nI0205 21:31:46.681545 1236 log.go:172] (0xc000705f40) (1) Data frame handling\nI0205 21:31:46.681592 1236 log.go:172] (0xc000705f40) (1) Data frame sent\nI0205 21:31:46.681604 1236 log.go:172] (0xc000c08e70) (0xc000705f40) Stream removed, broadcasting: 1\nI0205 21:31:46.681615 1236 log.go:172] (0xc000c08e70) Go away received\nI0205 21:31:46.682317 1236 log.go:172] (0xc000c08e70) (0xc000705f40) Stream removed, broadcasting: 1\nI0205 21:31:46.682330 1236 log.go:172] (0xc000c08e70) (0xc000b66320) Stream removed, broadcasting: 3\nI0205 21:31:46.682339 1236 log.go:172] (0xc000c08e70) (0xc000ae60a0) Stream removed, broadcasting: 5\n" Feb 5 
21:31:46.691: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 5 21:31:46.691: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 5 21:31:56.724: INFO: Waiting for StatefulSet statefulset-8418/ss2 to complete update Feb 5 21:31:56.724: INFO: Waiting for Pod statefulset-8418/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 5 21:31:56.724: INFO: Waiting for Pod statefulset-8418/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 5 21:31:56.724: INFO: Waiting for Pod statefulset-8418/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 5 21:32:06.741: INFO: Waiting for StatefulSet statefulset-8418/ss2 to complete update Feb 5 21:32:06.741: INFO: Waiting for Pod statefulset-8418/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 5 21:32:06.741: INFO: Waiting for Pod statefulset-8418/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 5 21:32:16.735: INFO: Waiting for StatefulSet statefulset-8418/ss2 to complete update Feb 5 21:32:16.735: INFO: Waiting for Pod statefulset-8418/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 5 21:32:16.735: INFO: Waiting for Pod statefulset-8418/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 5 21:32:26.736: INFO: Waiting for StatefulSet statefulset-8418/ss2 to complete update Feb 5 21:32:26.736: INFO: Waiting for Pod statefulset-8418/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 5 21:32:36.735: INFO: Waiting for StatefulSet statefulset-8418/ss2 to complete update Feb 5 21:32:36.735: INFO: Waiting for Pod statefulset-8418/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Feb 5 21:32:46.741: INFO: Waiting for StatefulSet statefulset-8418/ss2 to complete update STEP: Rolling back to a previous revision Feb 5 
21:32:56.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8418 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 5 21:32:57.172: INFO: stderr: "I0205 21:32:56.969656 1256 log.go:172] (0xc000b16bb0) (0xc000b02280) Create stream\nI0205 21:32:56.969880 1256 log.go:172] (0xc000b16bb0) (0xc000b02280) Stream added, broadcasting: 1\nI0205 21:32:56.973316 1256 log.go:172] (0xc000b16bb0) Reply frame received for 1\nI0205 21:32:56.973363 1256 log.go:172] (0xc000b16bb0) (0xc000afc280) Create stream\nI0205 21:32:56.973377 1256 log.go:172] (0xc000b16bb0) (0xc000afc280) Stream added, broadcasting: 3\nI0205 21:32:56.974770 1256 log.go:172] (0xc000b16bb0) Reply frame received for 3\nI0205 21:32:56.974810 1256 log.go:172] (0xc000b16bb0) (0xc000a86000) Create stream\nI0205 21:32:56.974824 1256 log.go:172] (0xc000b16bb0) (0xc000a86000) Stream added, broadcasting: 5\nI0205 21:32:56.976380 1256 log.go:172] (0xc000b16bb0) Reply frame received for 5\nI0205 21:32:57.057603 1256 log.go:172] (0xc000b16bb0) Data frame received for 5\nI0205 21:32:57.057718 1256 log.go:172] (0xc000a86000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0205 21:32:57.057930 1256 log.go:172] (0xc000a86000) (5) Data frame sent\nI0205 21:32:57.084343 1256 log.go:172] (0xc000b16bb0) Data frame received for 3\nI0205 21:32:57.084397 1256 log.go:172] (0xc000afc280) (3) Data frame handling\nI0205 21:32:57.084420 1256 log.go:172] (0xc000afc280) (3) Data frame sent\nI0205 21:32:57.163895 1256 log.go:172] (0xc000b16bb0) (0xc000a86000) Stream removed, broadcasting: 5\nI0205 21:32:57.164038 1256 log.go:172] (0xc000b16bb0) Data frame received for 1\nI0205 21:32:57.164079 1256 log.go:172] (0xc000b16bb0) (0xc000afc280) Stream removed, broadcasting: 3\nI0205 21:32:57.164123 1256 log.go:172] (0xc000b02280) (1) Data frame handling\nI0205 21:32:57.164156 1256 log.go:172] (0xc000b02280) (1) Data frame sent\nI0205 
21:32:57.164166 1256 log.go:172] (0xc000b16bb0) (0xc000b02280) Stream removed, broadcasting: 1\nI0205 21:32:57.164183 1256 log.go:172] (0xc000b16bb0) Go away received\nI0205 21:32:57.165180 1256 log.go:172] (0xc000b16bb0) (0xc000b02280) Stream removed, broadcasting: 1\nI0205 21:32:57.165193 1256 log.go:172] (0xc000b16bb0) (0xc000afc280) Stream removed, broadcasting: 3\nI0205 21:32:57.165197 1256 log.go:172] (0xc000b16bb0) (0xc000a86000) Stream removed, broadcasting: 5\n" Feb 5 21:32:57.173: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 5 21:32:57.173: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 5 21:33:07.269: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Feb 5 21:33:17.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8418 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:33:17.705: INFO: stderr: "I0205 21:33:17.540493 1275 log.go:172] (0xc000a55340) (0xc000986640) Create stream\nI0205 21:33:17.540663 1275 log.go:172] (0xc000a55340) (0xc000986640) Stream added, broadcasting: 1\nI0205 21:33:17.551091 1275 log.go:172] (0xc000a55340) Reply frame received for 1\nI0205 21:33:17.551384 1275 log.go:172] (0xc000a55340) (0xc00047d4a0) Create stream\nI0205 21:33:17.551496 1275 log.go:172] (0xc000a55340) (0xc00047d4a0) Stream added, broadcasting: 3\nI0205 21:33:17.553675 1275 log.go:172] (0xc000a55340) Reply frame received for 3\nI0205 21:33:17.553751 1275 log.go:172] (0xc000a55340) (0xc0008fe000) Create stream\nI0205 21:33:17.553793 1275 log.go:172] (0xc000a55340) (0xc0008fe000) Stream added, broadcasting: 5\nI0205 21:33:17.555294 1275 log.go:172] (0xc000a55340) Reply frame received for 5\nI0205 21:33:17.618267 1275 log.go:172] (0xc000a55340) Data frame received for 5\nI0205 21:33:17.618337 1275 log.go:172] 
(0xc0008fe000) (5) Data frame handling\nI0205 21:33:17.618371 1275 log.go:172] (0xc0008fe000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0205 21:33:17.618541 1275 log.go:172] (0xc000a55340) Data frame received for 3\nI0205 21:33:17.618612 1275 log.go:172] (0xc00047d4a0) (3) Data frame handling\nI0205 21:33:17.618637 1275 log.go:172] (0xc00047d4a0) (3) Data frame sent\nI0205 21:33:17.697547 1275 log.go:172] (0xc000a55340) Data frame received for 1\nI0205 21:33:17.697655 1275 log.go:172] (0xc000986640) (1) Data frame handling\nI0205 21:33:17.697683 1275 log.go:172] (0xc000986640) (1) Data frame sent\nI0205 21:33:17.697834 1275 log.go:172] (0xc000a55340) (0xc000986640) Stream removed, broadcasting: 1\nI0205 21:33:17.697951 1275 log.go:172] (0xc000a55340) (0xc00047d4a0) Stream removed, broadcasting: 3\nI0205 21:33:17.698458 1275 log.go:172] (0xc000a55340) (0xc0008fe000) Stream removed, broadcasting: 5\nI0205 21:33:17.698491 1275 log.go:172] (0xc000a55340) (0xc000986640) Stream removed, broadcasting: 1\nI0205 21:33:17.698509 1275 log.go:172] (0xc000a55340) (0xc00047d4a0) Stream removed, broadcasting: 3\nI0205 21:33:17.698517 1275 log.go:172] (0xc000a55340) (0xc0008fe000) Stream removed, broadcasting: 5\n" Feb 5 21:33:17.705: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 5 21:33:17.705: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 5 21:33:27.732: INFO: Waiting for StatefulSet statefulset-8418/ss2 to complete update Feb 5 21:33:27.732: INFO: Waiting for Pod statefulset-8418/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 5 21:33:27.732: INFO: Waiting for Pod statefulset-8418/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 5 21:33:38.091: INFO: Waiting for StatefulSet statefulset-8418/ss2 to complete update Feb 5 21:33:38.091: INFO: Waiting for Pod 
statefulset-8418/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 5 21:33:47.748: INFO: Waiting for StatefulSet statefulset-8418/ss2 to complete update Feb 5 21:33:47.748: INFO: Waiting for Pod statefulset-8418/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Feb 5 21:33:57.747: INFO: Waiting for StatefulSet statefulset-8418/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Feb 5 21:34:07.749: INFO: Deleting all statefulset in ns statefulset-8418 Feb 5 21:34:07.754: INFO: Scaling statefulset ss2 to 0 Feb 5 21:34:37.790: INFO: Waiting for statefulset status.replicas updated to 0 Feb 5 21:34:37.796: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:34:37.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8418" for this suite. 
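The rollback above proceeds "in reverse ordinal order": a StatefulSet rolling update replaces pods one at a time from the highest ordinal down to the `partition` boundary, waiting for each replacement to become ready before moving on. A minimal sketch of that ordering (the `ss2-` name prefix is taken from the log; the helper itself is illustrative, not framework code):

```python
def rolling_update_order(replicas: int, partition: int = 0) -> list[str]:
    """Return the order in which a StatefulSet rolling update replaces pods.

    Pods are updated from ordinal replicas-1 down to `partition`;
    pods with ordinal < partition keep the old controller revision.
    """
    return [f"ss2-{i}" for i in range(replicas - 1, partition - 1, -1)]
```

With the two-replica `ss2` set above, the update touches `ss2-1` first and `ss2-0` last, which matches the order of the `kubectl exec` calls in the log.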
• [SLOW TEST:222.285 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":83,"skipped":1290,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:34:37.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0205 21:34:49.383560 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 5 21:34:49.383: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:34:49.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9059" for this suite. 
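The garbage-collector test above deletes a ReplicationController "when not orphaning", i.e. with a cascading deletion propagation policy, so the GC removes the dependent pods. A toy model of the three `propagationPolicy` values (`Foreground`, `Background`, `Orphan`) and what happens to an owner's dependents; this is a behavioral sketch, not the actual controller logic:

```python
def dependents_after_delete(policy: str, dependents: list) -> list:
    """Model which dependent objects survive deleting their owner.

    Foreground/Background both cascade (dependents are deleted,
    differing only in ordering); Orphan severs the ownerReferences
    and leaves the dependents running.
    """
    if policy in ("Foreground", "Background"):
        return []
    if policy == "Orphan":
        return dependents
    raise ValueError(f"unknown propagationPolicy: {policy}")
```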
• [SLOW TEST:11.556 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":84,"skipped":1292,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:34:49.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:34:57.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8148" for this suite. 
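The kubelet test above verifies that a container with `securityContext.readOnlyRootFilesystem: true` cannot write to its root filesystem. A sketch of the relevant pod-spec fragment and a check against it (the pod dict and helper are illustrative assumptions, not the test's actual fixtures; the image tag is the one the suite uses elsewhere):

```python
# Illustrative pod-spec fragment with a read-only root filesystem.
READ_ONLY_POD = {
    "spec": {
        "containers": [
            {
                "name": "busybox-readonly",
                "image": "docker.io/library/busybox:1.29",
                "securityContext": {"readOnlyRootFilesystem": True},
            }
        ]
    }
}


def writes_allowed(container: dict) -> bool:
    """True unless the container's securityContext forbids root-fs writes."""
    sc = container.get("securityContext") or {}
    return not sc.get("readOnlyRootFilesystem", False)
```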
• [SLOW TEST:8.356 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1301,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:34:57.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Feb 5 21:34:57.943: INFO: PodSpec: initContainers in spec.initContainers Feb 5 21:36:00.667: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-dd3af10d-9715-4fa8-aa0f-aadb3e87a1cf", 
GenerateName:"", Namespace:"init-container-5013", SelfLink:"/api/v1/namespaces/init-container-5013/pods/pod-init-dd3af10d-9715-4fa8-aa0f-aadb3e87a1cf", UID:"20b5aee4-9ab4-4400-9956-0ee005f6b4e4", ResourceVersion:"6609137", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716535297, loc:(*time.Location)(0x7d100a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"943647041"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qsxxg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0011914c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, 
InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qsxxg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qsxxg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", 
Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qsxxg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0029f31a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00284f0e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0029f3230)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", 
TolerationSeconds:(*int64)(0xc0029f3250)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0029f3258), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0029f325c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535298, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535298, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535298, loc:(*time.Location)(0x7d100a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535297, loc:(*time.Location)(0x7d100a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.2", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.2"}}, StartTime:(*v1.Time)(0xc002ad8ee0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00235a850)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00235a8c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c229540f0d536f6abd946668514abe0c34c17090347e19e18e07a03f2ca79428", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ad8f20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002ad8f00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0029f32df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:36:00.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5013" for 
this suite. • [SLOW TEST:62.892 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":86,"skipped":1305,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:36:00.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f5bdd34d-fcef-4af1-868e-5a449055f72b STEP: Creating a pod to test consume configMaps Feb 5 21:36:00.844: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f" in namespace "projected-2124" to be "success or failure" Feb 5 21:36:00.850: INFO: Pod "pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.480093ms Feb 5 21:36:02.856: INFO: Pod "pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012392512s Feb 5 21:36:04.864: INFO: Pod "pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019692245s Feb 5 21:36:06.873: INFO: Pod "pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028680102s Feb 5 21:36:08.881: INFO: Pod "pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036778703s Feb 5 21:36:10.887: INFO: Pod "pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.043074038s STEP: Saw pod success Feb 5 21:36:10.887: INFO: Pod "pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f" satisfied condition "success or failure" Feb 5 21:36:10.889: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f container projected-configmap-volume-test: STEP: delete the pod Feb 5 21:36:10.922: INFO: Waiting for pod pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f to disappear Feb 5 21:36:10.953: INFO: Pod pod-projected-configmaps-72dd1137-7bd3-4a0c-8fb4-c25f4e75549f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:36:10.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2124" for this suite. 
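The `Waiting up to 5m0s for pod … to be "success or failure"` lines above come from the framework polling the pod phase on a fixed interval until it leaves `Pending`. A minimal generic sketch of that poll loop (the framework's own implementation lives in `test/e2e/framework`; this version just shows the timeout/interval shape, with injectable clock and sleep for testability):

```python
import time


def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() every `interval` seconds until it returns truthy
    or `timeout` seconds elapse. Returns whether the condition was met."""
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

In the log above the pod stays `Pending` for five polls (~10s at the 2s interval) before the check sees `Succeeded` and the wait returns.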
• [SLOW TEST:10.274 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1316,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:36:10.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-2496 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2496 STEP: Deleting pre-stop pod Feb 5 21:36:32.224: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:36:32.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2496" for this suite. • [SLOW TEST:21.296 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":88,"skipped":1353,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:36:32.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let 
webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 5 21:36:33.421: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 5 21:36:35.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:36:37.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, 
loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:36:39.482: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:36:41.464: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:36:43.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535393, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 5 21:36:46.615: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply with the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:36:46.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7999" for this suite. STEP: Destroying namespace "webhook-7999-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.884 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":89,"skipped":1365,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:36:47.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-6f5c2ba5-83bc-4d18-b292-b372370942f1 STEP: Creating a pod to test consume secrets Feb 5 21:36:47.242: 
INFO: Waiting up to 5m0s for pod "pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607" in namespace "secrets-6944" to be "success or failure" Feb 5 21:36:47.364: INFO: Pod "pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607": Phase="Pending", Reason="", readiness=false. Elapsed: 121.9314ms Feb 5 21:36:49.381: INFO: Pod "pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138448278s Feb 5 21:36:51.396: INFO: Pod "pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153946064s Feb 5 21:36:53.403: INFO: Pod "pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160658666s Feb 5 21:36:55.410: INFO: Pod "pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168073841s Feb 5 21:36:57.418: INFO: Pod "pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176189136s STEP: Saw pod success Feb 5 21:36:57.418: INFO: Pod "pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607" satisfied condition "success or failure" Feb 5 21:36:57.423: INFO: Trying to get logs from node jerma-node pod pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607 container secret-volume-test: STEP: delete the pod Feb 5 21:36:57.577: INFO: Waiting for pod pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607 to disappear Feb 5 21:36:57.584: INFO: Pod pod-secrets-41cbc611-6796-4a75-8e01-fd2b44cfd607 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:36:57.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6944" for this suite. 
• [SLOW TEST:10.449 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1368,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:36:57.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 5 21:36:58.629: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 5 21:37:00.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:37:02.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:37:04.933: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535418, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 5 21:37:07.692: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:37:07.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9600" for this suite. STEP: Destroying namespace "webhook-9600-markers" for this suite. 
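[Editor's note] Each "deployment status: v1.DeploymentStatus{...}" dump above is one iteration of a readiness poll: the wait ends once every replica is updated, ready, and available, at which point the "Available" condition flips to True. A trimmed-down sketch of that completeness check follows; the `DeploymentStatus` struct here is a stand-in for `appsv1.DeploymentStatus`, not the framework's exact code:

```go
package main

import "fmt"

// DeploymentStatus mirrors the fields printed in the log dumps
// (a trimmed-down stand-in for appsv1.DeploymentStatus).
type DeploymentStatus struct {
	Replicas          int32
	UpdatedReplicas   int32
	ReadyReplicas     int32
	AvailableReplicas int32
}

// deploymentComplete reports whether the rollout has finished: every
// desired replica is updated, ready, and available. While this returns
// false, the wait loop keeps logging the status dumps seen above.
func deploymentComplete(desired int32, s DeploymentStatus) bool {
	return s.UpdatedReplicas == desired &&
		s.ReadyReplicas == desired &&
		s.AvailableReplicas == desired
}

func main() {
	// Matches the dumps: 1 updated replica, but 0 ready/available.
	notReady := DeploymentStatus{Replicas: 1, UpdatedReplicas: 1}
	ready := DeploymentStatus{Replicas: 1, UpdatedReplicas: 1, ReadyReplicas: 1, AvailableReplicas: 1}
	fmt.Println(deploymentComplete(1, notReady)) // false
	fmt.Println(deploymentComplete(1, ready))    // true
}
```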
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.387 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":91,"skipped":1382,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:37:07.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 5 21:37:08.632: INFO: Waiting up to 5m0s for pod "pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d" in namespace "emptydir-5616" to be "success or failure" Feb 5 21:37:09.289: INFO: Pod "pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d": Phase="Pending", Reason="", readiness=false. Elapsed: 656.926554ms Feb 5 21:37:11.301: INFO: Pod "pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.668200451s Feb 5 21:37:13.314: INFO: Pod "pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.681814298s Feb 5 21:37:15.321: INFO: Pod "pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.688890126s Feb 5 21:37:17.326: INFO: Pod "pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.693300455s Feb 5 21:37:19.332: INFO: Pod "pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.699344426s Feb 5 21:37:21.337: INFO: Pod "pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.704044323s STEP: Saw pod success Feb 5 21:37:21.337: INFO: Pod "pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d" satisfied condition "success or failure" Feb 5 21:37:21.339: INFO: Trying to get logs from node jerma-node pod pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d container test-container: STEP: delete the pod Feb 5 21:37:21.393: INFO: Waiting for pod pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d to disappear Feb 5 21:37:21.411: INFO: Pod pod-9ce29af5-d9ae-4f97-ba92-fb70e3cc381d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:37:21.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5616" for this suite. 
• [SLOW TEST:13.434 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1395,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:37:21.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0205 21:37:23.783084 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 5 21:37:23.783: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:37:23.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-159" for this suite. 
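[Editor's note] The "expected 0 rs, got 1 rs" / "expected 0 pods, got 2 pods" lines above show the test polling while the garbage collector cascades the delete through ownerReferences: the ReplicaSet is owned by the Deployment, and the Pods by the ReplicaSet. The following is an illustrative fixpoint sketch of that cascade under simplified assumptions; the real controller walks a dependency graph and issues individual delete calls:

```go
package main

import "fmt"

// object is a trimmed stand-in for an API object with metadata.ownerReferences.
type object struct {
	UID    string
	Name   string
	Owners []string // UIDs of owner objects
}

// cascade repeatedly removes any object whose owners have all been deleted,
// until nothing more can be collected.
func cascade(objs []object, deleted map[string]bool) []object {
	for {
		var survivors []object
		removed := false
		for _, o := range objs {
			dead := len(o.Owners) > 0
			for _, owner := range o.Owners {
				if !deleted[owner] {
					dead = false // still has a live owner
				}
			}
			if dead {
				deleted[o.UID] = true
				removed = true
			} else {
				survivors = append(survivors, o)
			}
		}
		objs = survivors
		if !removed {
			return objs
		}
	}
}

func main() {
	objs := []object{
		{UID: "rs-1", Name: "rs/sample-5f65f8c764", Owners: []string{"deploy-1"}},
		{UID: "pod-1", Name: "pod/sample-5f65f8c764-a", Owners: []string{"rs-1"}},
		{UID: "pod-2", Name: "pod/sample-5f65f8c764-b", Owners: []string{"rs-1"}},
	}
	// Deleting the Deployment (non-orphaning propagation) collects everything.
	left := cascade(objs, map[string]bool{"deploy-1": true})
	fmt.Println(len(left)) // 0: the ReplicaSet and both Pods are garbage-collected
}
```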
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":93,"skipped":1399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:37:23.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Feb 5 21:37:25.191: INFO: Waiting up to 5m0s for pod "client-containers-7def5273-9e7c-4632-822e-a60977b4cde1" in namespace "containers-3939" to be "success or failure" Feb 5 21:37:25.233: INFO: Pod "client-containers-7def5273-9e7c-4632-822e-a60977b4cde1": Phase="Pending", Reason="", readiness=false. Elapsed: 41.910808ms Feb 5 21:37:27.300: INFO: Pod "client-containers-7def5273-9e7c-4632-822e-a60977b4cde1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109018868s Feb 5 21:37:29.320: INFO: Pod "client-containers-7def5273-9e7c-4632-822e-a60977b4cde1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129004937s Feb 5 21:37:31.327: INFO: Pod "client-containers-7def5273-9e7c-4632-822e-a60977b4cde1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.135727413s Feb 5 21:37:33.333: INFO: Pod "client-containers-7def5273-9e7c-4632-822e-a60977b4cde1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142335722s STEP: Saw pod success Feb 5 21:37:33.334: INFO: Pod "client-containers-7def5273-9e7c-4632-822e-a60977b4cde1" satisfied condition "success or failure" Feb 5 21:37:33.337: INFO: Trying to get logs from node jerma-node pod client-containers-7def5273-9e7c-4632-822e-a60977b4cde1 container test-container: STEP: delete the pod Feb 5 21:37:33.385: INFO: Waiting for pod client-containers-7def5273-9e7c-4632-822e-a60977b4cde1 to disappear Feb 5 21:37:33.390: INFO: Pod client-containers-7def5273-9e7c-4632-822e-a60977b4cde1 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:37:33.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3939" for this suite. • [SLOW TEST:9.557 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1431,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes 
client Feb 5 21:37:33.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 5 21:37:34.296: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 5 21:37:36.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:37:38.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, 
loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:37:40.336: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535454, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 5 21:37:43.381: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation 
webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:37:44.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3796" for this suite. STEP: Destroying namespace "webhook-3796-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.748 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":95,"skipped":1431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:37:44.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:37:55.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9757" for this suite. • [SLOW TEST:11.168 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":96,"skipped":1456,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:37:55.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Feb 5 21:38:05.469: INFO: 
ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1331 PodName:pod-sharedvolume-25a1504d-7e6f-4247-955e-f411437e600c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 5 21:38:05.469: INFO: >>> kubeConfig: /root/.kube/config I0205 21:38:05.523291 9 log.go:172] (0xc001ad6580) (0xc002749360) Create stream I0205 21:38:05.523357 9 log.go:172] (0xc001ad6580) (0xc002749360) Stream added, broadcasting: 1 I0205 21:38:05.526905 9 log.go:172] (0xc001ad6580) Reply frame received for 1 I0205 21:38:05.526935 9 log.go:172] (0xc001ad6580) (0xc001ee0820) Create stream I0205 21:38:05.526943 9 log.go:172] (0xc001ad6580) (0xc001ee0820) Stream added, broadcasting: 3 I0205 21:38:05.528135 9 log.go:172] (0xc001ad6580) Reply frame received for 3 I0205 21:38:05.528153 9 log.go:172] (0xc001ad6580) (0xc002749400) Create stream I0205 21:38:05.528160 9 log.go:172] (0xc001ad6580) (0xc002749400) Stream added, broadcasting: 5 I0205 21:38:05.529292 9 log.go:172] (0xc001ad6580) Reply frame received for 5 I0205 21:38:05.588573 9 log.go:172] (0xc001ad6580) Data frame received for 3 I0205 21:38:05.588629 9 log.go:172] (0xc001ee0820) (3) Data frame handling I0205 21:38:05.588642 9 log.go:172] (0xc001ee0820) (3) Data frame sent I0205 21:38:05.656762 9 log.go:172] (0xc001ad6580) (0xc001ee0820) Stream removed, broadcasting: 3 I0205 21:38:05.656923 9 log.go:172] (0xc001ad6580) Data frame received for 1 I0205 21:38:05.656947 9 log.go:172] (0xc002749360) (1) Data frame handling I0205 21:38:05.656967 9 log.go:172] (0xc002749360) (1) Data frame sent I0205 21:38:05.657033 9 log.go:172] (0xc001ad6580) (0xc002749360) Stream removed, broadcasting: 1 I0205 21:38:05.657126 9 log.go:172] (0xc001ad6580) (0xc002749400) Stream removed, broadcasting: 5 I0205 21:38:05.657200 9 log.go:172] (0xc001ad6580) Go away received I0205 21:38:05.657248 9 log.go:172] (0xc001ad6580) (0xc002749360) Stream removed, 
broadcasting: 1 I0205 21:38:05.657272 9 log.go:172] (0xc001ad6580) (0xc001ee0820) Stream removed, broadcasting: 3 I0205 21:38:05.657281 9 log.go:172] (0xc001ad6580) (0xc002749400) Stream removed, broadcasting: 5 Feb 5 21:38:05.657: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:38:05.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1331" for this suite. • [SLOW TEST:10.354 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":97,"skipped":1471,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:38:05.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 5 21:38:05.932: INFO: Waiting up to 5m0s for pod 
"pod-caac1f67-8705-4418-974c-a62decd4918b" in namespace "emptydir-4435" to be "success or failure" Feb 5 21:38:06.004: INFO: Pod "pod-caac1f67-8705-4418-974c-a62decd4918b": Phase="Pending", Reason="", readiness=false. Elapsed: 72.006684ms Feb 5 21:38:08.013: INFO: Pod "pod-caac1f67-8705-4418-974c-a62decd4918b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080360016s Feb 5 21:38:10.019: INFO: Pod "pod-caac1f67-8705-4418-974c-a62decd4918b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086962407s Feb 5 21:38:12.026: INFO: Pod "pod-caac1f67-8705-4418-974c-a62decd4918b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093165468s Feb 5 21:38:14.031: INFO: Pod "pod-caac1f67-8705-4418-974c-a62decd4918b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098826746s STEP: Saw pod success Feb 5 21:38:14.031: INFO: Pod "pod-caac1f67-8705-4418-974c-a62decd4918b" satisfied condition "success or failure" Feb 5 21:38:14.039: INFO: Trying to get logs from node jerma-node pod pod-caac1f67-8705-4418-974c-a62decd4918b container test-container: STEP: delete the pod Feb 5 21:38:14.067: INFO: Waiting for pod pod-caac1f67-8705-4418-974c-a62decd4918b to disappear Feb 5 21:38:14.130: INFO: Pod pod-caac1f67-8705-4418-974c-a62decd4918b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:38:14.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4435" for this suite. 
• [SLOW TEST:8.463 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1479,"failed":0} S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:38:14.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7746, will wait for the garbage collector to delete the pods Feb 5 21:38:26.392: INFO: Deleting Job.batch foo took: 9.685218ms Feb 5 21:38:26.692: INFO: Terminating Job.batch foo pods took: 300.475357ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:39:12.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7746" for this suite. 
• [SLOW TEST:58.369 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":99,"skipped":1480,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:39:12.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 5 21:39:12.646: INFO: Waiting up to 5m0s for pod "pod-10534044-0ddf-4cee-ae6b-b0b6e4ca77d4" in namespace "emptydir-5844" to be "success or failure" Feb 5 21:39:12.658: INFO: Pod "pod-10534044-0ddf-4cee-ae6b-b0b6e4ca77d4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.388322ms Feb 5 21:39:14.664: INFO: Pod "pod-10534044-0ddf-4cee-ae6b-b0b6e4ca77d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016943331s Feb 5 21:39:16.670: INFO: Pod "pod-10534044-0ddf-4cee-ae6b-b0b6e4ca77d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023561741s Feb 5 21:39:18.677: INFO: Pod "pod-10534044-0ddf-4cee-ae6b-b0b6e4ca77d4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.030147366s Feb 5 21:39:20.686: INFO: Pod "pod-10534044-0ddf-4cee-ae6b-b0b6e4ca77d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039304179s STEP: Saw pod success Feb 5 21:39:20.686: INFO: Pod "pod-10534044-0ddf-4cee-ae6b-b0b6e4ca77d4" satisfied condition "success or failure" Feb 5 21:39:20.690: INFO: Trying to get logs from node jerma-node pod pod-10534044-0ddf-4cee-ae6b-b0b6e4ca77d4 container test-container: STEP: delete the pod Feb 5 21:39:20.797: INFO: Waiting for pod pod-10534044-0ddf-4cee-ae6b-b0b6e4ca77d4 to disappear Feb 5 21:39:20.803: INFO: Pod pod-10534044-0ddf-4cee-ae6b-b0b6e4ca77d4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:39:20.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5844" for this suite. • [SLOW TEST:8.305 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1484,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:39:20.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-49984b23-cf97-484c-8f41-621873ebd845 STEP: Creating a pod to test consume configMaps Feb 5 21:39:20.946: INFO: Waiting up to 5m0s for pod "pod-configmaps-a055825e-950d-48d0-831a-f2198098f111" in namespace "configmap-9284" to be "success or failure" Feb 5 21:39:20.985: INFO: Pod "pod-configmaps-a055825e-950d-48d0-831a-f2198098f111": Phase="Pending", Reason="", readiness=false. Elapsed: 39.233891ms Feb 5 21:39:22.992: INFO: Pod "pod-configmaps-a055825e-950d-48d0-831a-f2198098f111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046312922s Feb 5 21:39:25.044: INFO: Pod "pod-configmaps-a055825e-950d-48d0-831a-f2198098f111": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098409997s Feb 5 21:39:27.071: INFO: Pod "pod-configmaps-a055825e-950d-48d0-831a-f2198098f111": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.12494744s STEP: Saw pod success Feb 5 21:39:27.071: INFO: Pod "pod-configmaps-a055825e-950d-48d0-831a-f2198098f111" satisfied condition "success or failure" Feb 5 21:39:27.075: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a055825e-950d-48d0-831a-f2198098f111 container configmap-volume-test: STEP: delete the pod Feb 5 21:39:27.130: INFO: Waiting for pod pod-configmaps-a055825e-950d-48d0-831a-f2198098f111 to disappear Feb 5 21:39:27.220: INFO: Pod pod-configmaps-a055825e-950d-48d0-831a-f2198098f111 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:39:27.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9284" for this suite. • [SLOW TEST:6.419 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1486,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:39:27.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be 
provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1127/configmap-test-2e5fcc68-1663-4b83-a502-ccbbb45bba09 STEP: Creating a pod to test consume configMaps Feb 5 21:39:27.631: INFO: Waiting up to 5m0s for pod "pod-configmaps-5eeebf57-6318-4154-a462-c5d8f80193d2" in namespace "configmap-1127" to be "success or failure" Feb 5 21:39:27.642: INFO: Pod "pod-configmaps-5eeebf57-6318-4154-a462-c5d8f80193d2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.571661ms Feb 5 21:39:29.652: INFO: Pod "pod-configmaps-5eeebf57-6318-4154-a462-c5d8f80193d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021409799s Feb 5 21:39:31.661: INFO: Pod "pod-configmaps-5eeebf57-6318-4154-a462-c5d8f80193d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030237963s Feb 5 21:39:33.673: INFO: Pod "pod-configmaps-5eeebf57-6318-4154-a462-c5d8f80193d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042273676s Feb 5 21:39:35.736: INFO: Pod "pod-configmaps-5eeebf57-6318-4154-a462-c5d8f80193d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.105357838s STEP: Saw pod success Feb 5 21:39:35.736: INFO: Pod "pod-configmaps-5eeebf57-6318-4154-a462-c5d8f80193d2" satisfied condition "success or failure" Feb 5 21:39:35.739: INFO: Trying to get logs from node jerma-node pod pod-configmaps-5eeebf57-6318-4154-a462-c5d8f80193d2 container env-test: STEP: delete the pod Feb 5 21:39:36.720: INFO: Waiting for pod pod-configmaps-5eeebf57-6318-4154-a462-c5d8f80193d2 to disappear Feb 5 21:39:36.731: INFO: Pod pod-configmaps-5eeebf57-6318-4154-a462-c5d8f80193d2 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:39:36.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1127" for this suite. • [SLOW TEST:9.547 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:39:36.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:39:48.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3314" for this suite. • [SLOW TEST:11.282 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":103,"skipped":1528,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:39:48.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-zcv7 STEP: Creating a pod to test atomic-volume-subpath Feb 5 21:39:48.246: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zcv7" in namespace "subpath-8267" to be "success or failure" Feb 5 21:39:48.251: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.357458ms Feb 5 21:39:50.258: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012357827s Feb 5 21:39:52.263: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016919081s Feb 5 21:39:54.271: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02494171s Feb 5 21:39:56.282: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Running", Reason="", readiness=true. Elapsed: 8.036258219s Feb 5 21:39:58.290: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.0435987s Feb 5 21:40:00.771: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Running", Reason="", readiness=true. Elapsed: 12.52528964s Feb 5 21:40:02.779: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Running", Reason="", readiness=true. Elapsed: 14.532597751s Feb 5 21:40:04.786: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Running", Reason="", readiness=true. Elapsed: 16.540217028s Feb 5 21:40:06.792: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Running", Reason="", readiness=true. Elapsed: 18.545875627s Feb 5 21:40:08.798: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Running", Reason="", readiness=true. Elapsed: 20.552256723s Feb 5 21:40:10.806: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Running", Reason="", readiness=true. Elapsed: 22.560420792s Feb 5 21:40:12.814: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Running", Reason="", readiness=true. Elapsed: 24.567872625s Feb 5 21:40:14.822: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Running", Reason="", readiness=true. Elapsed: 26.575872426s Feb 5 21:40:16.828: INFO: Pod "pod-subpath-test-secret-zcv7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.582415932s STEP: Saw pod success Feb 5 21:40:16.829: INFO: Pod "pod-subpath-test-secret-zcv7" satisfied condition "success or failure" Feb 5 21:40:16.832: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-zcv7 container test-container-subpath-secret-zcv7: STEP: delete the pod Feb 5 21:40:16.903: INFO: Waiting for pod pod-subpath-test-secret-zcv7 to disappear Feb 5 21:40:16.915: INFO: Pod pod-subpath-test-secret-zcv7 no longer exists STEP: Deleting pod pod-subpath-test-secret-zcv7 Feb 5 21:40:16.915: INFO: Deleting pod "pod-subpath-test-secret-zcv7" in namespace "subpath-8267" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:40:16.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8267" for this suite. • [SLOW TEST:28.863 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":104,"skipped":1528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:40:16.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2875 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-2875 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2875 Feb 5 21:40:17.238: INFO: Found 0 stateful pods, waiting for 1 Feb 5 21:40:27.249: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 5 21:40:27.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 5 21:40:30.126: INFO: stderr: "I0205 21:40:29.879096 1295 log.go:172] (0xc00057b1e0) (0xc000639e00) Create stream\nI0205 21:40:29.879231 1295 log.go:172] (0xc00057b1e0) (0xc000639e00) Stream added, broadcasting: 1\nI0205 21:40:29.883802 1295 log.go:172] (0xc00057b1e0) Reply frame received for 1\nI0205 21:40:29.883878 1295 log.go:172] (0xc00057b1e0) (0xc0005f86e0) Create stream\nI0205 21:40:29.883900 1295 log.go:172] (0xc00057b1e0) (0xc0005f86e0) Stream added, broadcasting: 3\nI0205 21:40:29.885737 
1295 log.go:172] (0xc00057b1e0) Reply frame received for 3\nI0205 21:40:29.885765 1295 log.go:172] (0xc00057b1e0) (0xc000639ea0) Create stream\nI0205 21:40:29.885774 1295 log.go:172] (0xc00057b1e0) (0xc000639ea0) Stream added, broadcasting: 5\nI0205 21:40:29.888195 1295 log.go:172] (0xc00057b1e0) Reply frame received for 5\nI0205 21:40:29.993709 1295 log.go:172] (0xc00057b1e0) Data frame received for 5\nI0205 21:40:29.993779 1295 log.go:172] (0xc000639ea0) (5) Data frame handling\nI0205 21:40:29.993799 1295 log.go:172] (0xc000639ea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0205 21:40:30.038875 1295 log.go:172] (0xc00057b1e0) Data frame received for 3\nI0205 21:40:30.038946 1295 log.go:172] (0xc0005f86e0) (3) Data frame handling\nI0205 21:40:30.038975 1295 log.go:172] (0xc0005f86e0) (3) Data frame sent\nI0205 21:40:30.118484 1295 log.go:172] (0xc00057b1e0) Data frame received for 1\nI0205 21:40:30.118685 1295 log.go:172] (0xc00057b1e0) (0xc0005f86e0) Stream removed, broadcasting: 3\nI0205 21:40:30.118744 1295 log.go:172] (0xc000639e00) (1) Data frame handling\nI0205 21:40:30.118763 1295 log.go:172] (0xc000639e00) (1) Data frame sent\nI0205 21:40:30.118783 1295 log.go:172] (0xc00057b1e0) (0xc000639ea0) Stream removed, broadcasting: 5\nI0205 21:40:30.118868 1295 log.go:172] (0xc00057b1e0) (0xc000639e00) Stream removed, broadcasting: 1\nI0205 21:40:30.118909 1295 log.go:172] (0xc00057b1e0) Go away received\nI0205 21:40:30.119428 1295 log.go:172] (0xc00057b1e0) (0xc000639e00) Stream removed, broadcasting: 1\nI0205 21:40:30.119442 1295 log.go:172] (0xc00057b1e0) (0xc0005f86e0) Stream removed, broadcasting: 3\nI0205 21:40:30.119445 1295 log.go:172] (0xc00057b1e0) (0xc000639ea0) Stream removed, broadcasting: 5\n" Feb 5 21:40:30.126: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 5 21:40:30.126: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 5 21:40:30.132: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 5 21:40:40.139: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 5 21:40:40.139: INFO: Waiting for statefulset status.replicas updated to 0 Feb 5 21:40:40.165: INFO: POD NODE PHASE GRACE CONDITIONS Feb 5 21:40:40.165: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:30 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC }] Feb 5 21:40:40.165: INFO: Feb 5 21:40:40.165: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 5 21:40:41.172: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985100417s Feb 5 21:40:42.227: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978473159s Feb 5 21:40:43.235: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.922911938s Feb 5 21:40:44.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.91569549s Feb 5 21:40:45.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.906810503s Feb 5 21:40:46.494: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.898548233s Feb 5 21:40:47.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.656237195s Feb 5 21:40:48.910: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.25118587s Feb 5 21:40:49.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 240.9054ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace 
statefulset-2875 Feb 5 21:40:50.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:40:51.244: INFO: stderr: "I0205 21:40:51.075651 1317 log.go:172] (0xc000968000) (0xc00078a000) Create stream\nI0205 21:40:51.075843 1317 log.go:172] (0xc000968000) (0xc00078a000) Stream added, broadcasting: 1\nI0205 21:40:51.079354 1317 log.go:172] (0xc000968000) Reply frame received for 1\nI0205 21:40:51.079383 1317 log.go:172] (0xc000968000) (0xc00078a0a0) Create stream\nI0205 21:40:51.079391 1317 log.go:172] (0xc000968000) (0xc00078a0a0) Stream added, broadcasting: 3\nI0205 21:40:51.080293 1317 log.go:172] (0xc000968000) Reply frame received for 3\nI0205 21:40:51.080319 1317 log.go:172] (0xc000968000) (0xc0008a0000) Create stream\nI0205 21:40:51.080327 1317 log.go:172] (0xc000968000) (0xc0008a0000) Stream added, broadcasting: 5\nI0205 21:40:51.081300 1317 log.go:172] (0xc000968000) Reply frame received for 5\nI0205 21:40:51.161424 1317 log.go:172] (0xc000968000) Data frame received for 5\nI0205 21:40:51.161541 1317 log.go:172] (0xc0008a0000) (5) Data frame handling\nI0205 21:40:51.161578 1317 log.go:172] (0xc0008a0000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0205 21:40:51.161657 1317 log.go:172] (0xc000968000) Data frame received for 3\nI0205 21:40:51.161680 1317 log.go:172] (0xc00078a0a0) (3) Data frame handling\nI0205 21:40:51.161704 1317 log.go:172] (0xc00078a0a0) (3) Data frame sent\nI0205 21:40:51.231464 1317 log.go:172] (0xc000968000) (0xc00078a0a0) Stream removed, broadcasting: 3\nI0205 21:40:51.231580 1317 log.go:172] (0xc000968000) Data frame received for 1\nI0205 21:40:51.231609 1317 log.go:172] (0xc000968000) (0xc0008a0000) Stream removed, broadcasting: 5\nI0205 21:40:51.231650 1317 log.go:172] (0xc00078a000) (1) Data frame handling\nI0205 21:40:51.231676 1317 log.go:172] (0xc00078a000) (1) 
Data frame sent\nI0205 21:40:51.231718 1317 log.go:172] (0xc000968000) (0xc00078a000) Stream removed, broadcasting: 1\nI0205 21:40:51.231734 1317 log.go:172] (0xc000968000) Go away received\nI0205 21:40:51.233108 1317 log.go:172] (0xc000968000) (0xc00078a000) Stream removed, broadcasting: 1\nI0205 21:40:51.233215 1317 log.go:172] (0xc000968000) (0xc00078a0a0) Stream removed, broadcasting: 3\nI0205 21:40:51.233228 1317 log.go:172] (0xc000968000) (0xc0008a0000) Stream removed, broadcasting: 5\n" Feb 5 21:40:51.245: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 5 21:40:51.245: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 5 21:40:51.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:40:51.718: INFO: stderr: "I0205 21:40:51.482487 1335 log.go:172] (0xc00090b810) (0xc0008c66e0) Create stream\nI0205 21:40:51.482865 1335 log.go:172] (0xc00090b810) (0xc0008c66e0) Stream added, broadcasting: 1\nI0205 21:40:51.498885 1335 log.go:172] (0xc00090b810) Reply frame received for 1\nI0205 21:40:51.499219 1335 log.go:172] (0xc00090b810) (0xc00060fc20) Create stream\nI0205 21:40:51.499260 1335 log.go:172] (0xc00090b810) (0xc00060fc20) Stream added, broadcasting: 3\nI0205 21:40:51.502694 1335 log.go:172] (0xc00090b810) Reply frame received for 3\nI0205 21:40:51.502815 1335 log.go:172] (0xc00090b810) (0xc000558820) Create stream\nI0205 21:40:51.502830 1335 log.go:172] (0xc00090b810) (0xc000558820) Stream added, broadcasting: 5\nI0205 21:40:51.504493 1335 log.go:172] (0xc00090b810) Reply frame received for 5\nI0205 21:40:51.572016 1335 log.go:172] (0xc00090b810) Data frame received for 5\nI0205 21:40:51.572336 1335 log.go:172] (0xc000558820) (5) Data frame handling\nI0205 21:40:51.572668 1335 log.go:172] 
(0xc000558820) (5) Data frame sent\nI0205 21:40:51.572889 1335 log.go:172] (0xc00090b810) Data frame received for 3\nI0205 21:40:51.572943 1335 log.go:172] (0xc00060fc20) (3) Data frame handling\nI0205 21:40:51.573016 1335 log.go:172] (0xc00060fc20) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0205 21:40:51.702435 1335 log.go:172] (0xc00090b810) Data frame received for 1\nI0205 21:40:51.702666 1335 log.go:172] (0xc00090b810) (0xc00060fc20) Stream removed, broadcasting: 3\nI0205 21:40:51.702754 1335 log.go:172] (0xc0008c66e0) (1) Data frame handling\nI0205 21:40:51.702794 1335 log.go:172] (0xc0008c66e0) (1) Data frame sent\nI0205 21:40:51.702864 1335 log.go:172] (0xc00090b810) (0xc000558820) Stream removed, broadcasting: 5\nI0205 21:40:51.702922 1335 log.go:172] (0xc00090b810) (0xc0008c66e0) Stream removed, broadcasting: 1\nI0205 21:40:51.702947 1335 log.go:172] (0xc00090b810) Go away received\nI0205 21:40:51.703984 1335 log.go:172] (0xc00090b810) (0xc0008c66e0) Stream removed, broadcasting: 1\nI0205 21:40:51.704152 1335 log.go:172] (0xc00090b810) (0xc00060fc20) Stream removed, broadcasting: 3\nI0205 21:40:51.704204 1335 log.go:172] (0xc00090b810) (0xc000558820) Stream removed, broadcasting: 5\n" Feb 5 21:40:51.718: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 5 21:40:51.719: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 5 21:40:51.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:40:52.101: INFO: stderr: "I0205 21:40:51.893219 1355 log.go:172] (0xc0003c0dc0) (0xc0005afae0) Create stream\nI0205 21:40:51.893493 1355 log.go:172] (0xc0003c0dc0) (0xc0005afae0) Stream added, broadcasting: 
1\nI0205 21:40:51.898026 1355 log.go:172] (0xc0003c0dc0) Reply frame received for 1\nI0205 21:40:51.898082 1355 log.go:172] (0xc0003c0dc0) (0xc000572000) Create stream\nI0205 21:40:51.898093 1355 log.go:172] (0xc0003c0dc0) (0xc000572000) Stream added, broadcasting: 3\nI0205 21:40:51.899324 1355 log.go:172] (0xc0003c0dc0) Reply frame received for 3\nI0205 21:40:51.899345 1355 log.go:172] (0xc0003c0dc0) (0xc00028e000) Create stream\nI0205 21:40:51.899351 1355 log.go:172] (0xc0003c0dc0) (0xc00028e000) Stream added, broadcasting: 5\nI0205 21:40:51.900590 1355 log.go:172] (0xc0003c0dc0) Reply frame received for 5\nI0205 21:40:51.975018 1355 log.go:172] (0xc0003c0dc0) Data frame received for 5\nI0205 21:40:51.975219 1355 log.go:172] (0xc00028e000) (5) Data frame handling\nI0205 21:40:51.975274 1355 log.go:172] (0xc00028e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0205 21:40:51.975811 1355 log.go:172] (0xc0003c0dc0) Data frame received for 5\nI0205 21:40:51.975847 1355 log.go:172] (0xc00028e000) (5) Data frame handling\nI0205 21:40:51.975901 1355 log.go:172] (0xc00028e000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0205 21:40:51.977843 1355 log.go:172] (0xc0003c0dc0) Data frame received for 3\nI0205 21:40:51.977881 1355 log.go:172] (0xc000572000) (3) Data frame handling\nI0205 21:40:51.977924 1355 log.go:172] (0xc000572000) (3) Data frame sent\nI0205 21:40:51.978641 1355 log.go:172] (0xc0003c0dc0) Data frame received for 5\nI0205 21:40:51.978653 1355 log.go:172] (0xc00028e000) (5) Data frame handling\nI0205 21:40:51.978679 1355 log.go:172] (0xc00028e000) (5) Data frame sent\n+ true\nI0205 21:40:52.090286 1355 log.go:172] (0xc0003c0dc0) (0xc000572000) Stream removed, broadcasting: 3\nI0205 21:40:52.090883 1355 log.go:172] (0xc0003c0dc0) Data frame received for 1\nI0205 21:40:52.091062 1355 log.go:172] (0xc0003c0dc0) (0xc00028e000) Stream removed, broadcasting: 5\nI0205 21:40:52.091149 1355 
log.go:172] (0xc0005afae0) (1) Data frame handling\nI0205 21:40:52.091254 1355 log.go:172] (0xc0005afae0) (1) Data frame sent\nI0205 21:40:52.091343 1355 log.go:172] (0xc0003c0dc0) (0xc0005afae0) Stream removed, broadcasting: 1\nI0205 21:40:52.091400 1355 log.go:172] (0xc0003c0dc0) Go away received\nI0205 21:40:52.092322 1355 log.go:172] (0xc0003c0dc0) (0xc0005afae0) Stream removed, broadcasting: 1\nI0205 21:40:52.092373 1355 log.go:172] (0xc0003c0dc0) (0xc000572000) Stream removed, broadcasting: 3\nI0205 21:40:52.092406 1355 log.go:172] (0xc0003c0dc0) (0xc00028e000) Stream removed, broadcasting: 5\n" Feb 5 21:40:52.101: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 5 21:40:52.101: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 5 21:40:52.114: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 5 21:40:52.114: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 5 21:40:52.114: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 5 21:40:52.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 5 21:40:52.541: INFO: stderr: "I0205 21:40:52.319112 1373 log.go:172] (0xc0009b82c0) (0xc0007d5e00) Create stream\nI0205 21:40:52.319491 1373 log.go:172] (0xc0009b82c0) (0xc0007d5e00) Stream added, broadcasting: 1\nI0205 21:40:52.327690 1373 log.go:172] (0xc0009b82c0) Reply frame received for 1\nI0205 21:40:52.328515 1373 log.go:172] (0xc0009b82c0) (0xc000020280) Create stream\nI0205 21:40:52.328560 1373 log.go:172] (0xc0009b82c0) (0xc000020280) Stream added, broadcasting: 3\nI0205 21:40:52.331610 1373 log.go:172] (0xc0009b82c0) 
Reply frame received for 3\nI0205 21:40:52.331636 1373 log.go:172] (0xc0009b82c0) (0xc000020460) Create stream\nI0205 21:40:52.331645 1373 log.go:172] (0xc0009b82c0) (0xc000020460) Stream added, broadcasting: 5\nI0205 21:40:52.335491 1373 log.go:172] (0xc0009b82c0) Reply frame received for 5\nI0205 21:40:52.427236 1373 log.go:172] (0xc0009b82c0) Data frame received for 3\nI0205 21:40:52.427368 1373 log.go:172] (0xc000020280) (3) Data frame handling\nI0205 21:40:52.427404 1373 log.go:172] (0xc000020280) (3) Data frame sent\nI0205 21:40:52.427466 1373 log.go:172] (0xc0009b82c0) Data frame received for 5\nI0205 21:40:52.427513 1373 log.go:172] (0xc000020460) (5) Data frame handling\nI0205 21:40:52.427531 1373 log.go:172] (0xc000020460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0205 21:40:52.525573 1373 log.go:172] (0xc0009b82c0) (0xc000020280) Stream removed, broadcasting: 3\nI0205 21:40:52.525879 1373 log.go:172] (0xc0009b82c0) Data frame received for 1\nI0205 21:40:52.525896 1373 log.go:172] (0xc0007d5e00) (1) Data frame handling\nI0205 21:40:52.525922 1373 log.go:172] (0xc0007d5e00) (1) Data frame sent\nI0205 21:40:52.525932 1373 log.go:172] (0xc0009b82c0) (0xc0007d5e00) Stream removed, broadcasting: 1\nI0205 21:40:52.525981 1373 log.go:172] (0xc0009b82c0) (0xc000020460) Stream removed, broadcasting: 5\nI0205 21:40:52.526104 1373 log.go:172] (0xc0009b82c0) Go away received\nI0205 21:40:52.527105 1373 log.go:172] (0xc0009b82c0) (0xc0007d5e00) Stream removed, broadcasting: 1\nI0205 21:40:52.527122 1373 log.go:172] (0xc0009b82c0) (0xc000020280) Stream removed, broadcasting: 3\nI0205 21:40:52.527131 1373 log.go:172] (0xc0009b82c0) (0xc000020460) Stream removed, broadcasting: 5\n" Feb 5 21:40:52.542: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 5 21:40:52.542: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' 
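The `mv -v … || true` commands the test execs in each pod are how it toggles readiness: moving `index.html` out of the Apache htdocs directory makes the HTTP readiness probe fail, and `|| true` makes the command idempotent, which is why the re-runs against ss-1 and ss-2 above log `mv: can't rename '/tmp/index.html': No such file or directory` yet still return rc 0. A minimal local sketch of that pattern (the temp directories here are stand-ins for the pod's real `/usr/local/apache2/htdocs` and `/tmp` paths, not what the test uses):

```shell
#!/bin/sh
# Stand-ins for the pod paths; the real test runs the mv via
# `kubectl exec ... -- /bin/sh -x -c '...'` inside each stateful pod.
htdocs=$(mktemp -d)
parked=$(mktemp -d)
echo ok > "$htdocs/index.html"

# First run: break readiness by moving index.html out of the served dir.
mv -v "$htdocs/index.html" "$parked/" || true

# Re-run: the file is already gone, so mv fails on stderr (as seen in the
# ss-1/ss-2 output above), but `|| true` keeps the overall exit status 0.
mv -v "$htdocs/index.html" "$parked/" 2>&1 || true
echo "rc=$?"
```

The exit-status masking matters because the e2e framework treats a nonzero rc from `RunHostCmd` as a failure worth retrying, as the rc: 1 retry loop later in this log shows.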
Feb 5 21:40:52.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 5 21:40:52.894: INFO: stderr: "I0205 21:40:52.687027 1392 log.go:172] (0xc000ac14a0) (0xc0008e48c0) Create stream\nI0205 21:40:52.687199 1392 log.go:172] (0xc000ac14a0) (0xc0008e48c0) Stream added, broadcasting: 1\nI0205 21:40:52.692295 1392 log.go:172] (0xc000ac14a0) Reply frame received for 1\nI0205 21:40:52.692365 1392 log.go:172] (0xc000ac14a0) (0xc0006ada40) Create stream\nI0205 21:40:52.692370 1392 log.go:172] (0xc000ac14a0) (0xc0006ada40) Stream added, broadcasting: 3\nI0205 21:40:52.693114 1392 log.go:172] (0xc000ac14a0) Reply frame received for 3\nI0205 21:40:52.693136 1392 log.go:172] (0xc000ac14a0) (0xc000680640) Create stream\nI0205 21:40:52.693141 1392 log.go:172] (0xc000ac14a0) (0xc000680640) Stream added, broadcasting: 5\nI0205 21:40:52.693960 1392 log.go:172] (0xc000ac14a0) Reply frame received for 5\nI0205 21:40:52.772321 1392 log.go:172] (0xc000ac14a0) Data frame received for 5\nI0205 21:40:52.772490 1392 log.go:172] (0xc000680640) (5) Data frame handling\nI0205 21:40:52.772521 1392 log.go:172] (0xc000680640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0205 21:40:52.793559 1392 log.go:172] (0xc000ac14a0) Data frame received for 3\nI0205 21:40:52.793595 1392 log.go:172] (0xc0006ada40) (3) Data frame handling\nI0205 21:40:52.793615 1392 log.go:172] (0xc0006ada40) (3) Data frame sent\nI0205 21:40:52.883790 1392 log.go:172] (0xc000ac14a0) Data frame received for 1\nI0205 21:40:52.883898 1392 log.go:172] (0xc000ac14a0) (0xc0006ada40) Stream removed, broadcasting: 3\nI0205 21:40:52.883934 1392 log.go:172] (0xc0008e48c0) (1) Data frame handling\nI0205 21:40:52.883958 1392 log.go:172] (0xc0008e48c0) (1) Data frame sent\nI0205 21:40:52.883979 1392 log.go:172] (0xc000ac14a0) (0xc000680640) Stream removed, broadcasting: 
5\nI0205 21:40:52.883996 1392 log.go:172] (0xc000ac14a0) (0xc0008e48c0) Stream removed, broadcasting: 1\nI0205 21:40:52.884017 1392 log.go:172] (0xc000ac14a0) Go away received\nI0205 21:40:52.884683 1392 log.go:172] (0xc000ac14a0) (0xc0008e48c0) Stream removed, broadcasting: 1\nI0205 21:40:52.884731 1392 log.go:172] (0xc000ac14a0) (0xc0006ada40) Stream removed, broadcasting: 3\nI0205 21:40:52.884740 1392 log.go:172] (0xc000ac14a0) (0xc000680640) Stream removed, broadcasting: 5\n" Feb 5 21:40:52.895: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 5 21:40:52.895: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 5 21:40:52.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 5 21:40:53.256: INFO: stderr: "I0205 21:40:53.058419 1411 log.go:172] (0xc0000ec2c0) (0xc000601ae0) Create stream\nI0205 21:40:53.058662 1411 log.go:172] (0xc0000ec2c0) (0xc000601ae0) Stream added, broadcasting: 1\nI0205 21:40:53.062536 1411 log.go:172] (0xc0000ec2c0) Reply frame received for 1\nI0205 21:40:53.062612 1411 log.go:172] (0xc0000ec2c0) (0xc000601b80) Create stream\nI0205 21:40:53.062621 1411 log.go:172] (0xc0000ec2c0) (0xc000601b80) Stream added, broadcasting: 3\nI0205 21:40:53.063579 1411 log.go:172] (0xc0000ec2c0) Reply frame received for 3\nI0205 21:40:53.063607 1411 log.go:172] (0xc0000ec2c0) (0xc0003cb540) Create stream\nI0205 21:40:53.063613 1411 log.go:172] (0xc0000ec2c0) (0xc0003cb540) Stream added, broadcasting: 5\nI0205 21:40:53.064761 1411 log.go:172] (0xc0000ec2c0) Reply frame received for 5\nI0205 21:40:53.139322 1411 log.go:172] (0xc0000ec2c0) Data frame received for 5\nI0205 21:40:53.139470 1411 log.go:172] (0xc0003cb540) (5) Data frame handling\nI0205 21:40:53.139521 1411 log.go:172] (0xc0003cb540) (5) 
Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0205 21:40:53.164258 1411 log.go:172] (0xc0000ec2c0) Data frame received for 3\nI0205 21:40:53.164313 1411 log.go:172] (0xc000601b80) (3) Data frame handling\nI0205 21:40:53.164331 1411 log.go:172] (0xc000601b80) (3) Data frame sent\nI0205 21:40:53.243373 1411 log.go:172] (0xc0000ec2c0) Data frame received for 1\nI0205 21:40:53.243547 1411 log.go:172] (0xc0000ec2c0) (0xc000601b80) Stream removed, broadcasting: 3\nI0205 21:40:53.243616 1411 log.go:172] (0xc000601ae0) (1) Data frame handling\nI0205 21:40:53.243663 1411 log.go:172] (0xc000601ae0) (1) Data frame sent\nI0205 21:40:53.243689 1411 log.go:172] (0xc0000ec2c0) (0xc0003cb540) Stream removed, broadcasting: 5\nI0205 21:40:53.243808 1411 log.go:172] (0xc0000ec2c0) (0xc000601ae0) Stream removed, broadcasting: 1\nI0205 21:40:53.243834 1411 log.go:172] (0xc0000ec2c0) Go away received\nI0205 21:40:53.245079 1411 log.go:172] (0xc0000ec2c0) (0xc000601ae0) Stream removed, broadcasting: 1\nI0205 21:40:53.245094 1411 log.go:172] (0xc0000ec2c0) (0xc000601b80) Stream removed, broadcasting: 3\nI0205 21:40:53.245101 1411 log.go:172] (0xc0000ec2c0) (0xc0003cb540) Stream removed, broadcasting: 5\n" Feb 5 21:40:53.257: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 5 21:40:53.257: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 5 21:40:53.257: INFO: Waiting for statefulset status.replicas updated to 0 Feb 5 21:40:53.262: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Feb 5 21:41:03.272: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 5 21:41:03.272: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 5 21:41:03.272: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 5 
21:41:03.293: INFO: POD NODE PHASE GRACE CONDITIONS Feb 5 21:41:03.293: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC }] Feb 5 21:41:03.293: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:03.293: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:03.293: INFO: Feb 5 21:41:03.293: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 5 21:41:04.955: INFO: POD NODE PHASE GRACE CONDITIONS Feb 5 21:41:04.955: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC }] Feb 5 21:41:04.955: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:04.956: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:04.956: INFO: Feb 5 21:41:04.956: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 5 21:41:05.962: INFO: POD NODE PHASE GRACE CONDITIONS Feb 5 21:41:05.962: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC }] Feb 5 21:41:05.962: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:05.962: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:05.962: INFO: Feb 5 21:41:05.962: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 5 21:41:07.357: INFO: POD NODE PHASE GRACE CONDITIONS Feb 5 21:41:07.357: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC }] Feb 5 21:41:07.357: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:07.357: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:07.357: INFO: Feb 5 21:41:07.357: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 5 21:41:08.368: INFO: POD NODE PHASE GRACE CONDITIONS Feb 5 21:41:08.368: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC }] Feb 5 21:41:08.368: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:08.368: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:08.368: INFO: Feb 5 21:41:08.368: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 5 21:41:09.376: INFO: POD NODE PHASE GRACE CONDITIONS Feb 5 21:41:09.376: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC }] Feb 5 21:41:09.376: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:09.376: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:09.376: INFO: Feb 5 
21:41:09.376: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 5 21:41:10.384: INFO: POD NODE PHASE GRACE CONDITIONS Feb 5 21:41:10.384: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC }] Feb 5 21:41:10.384: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:10.384: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:10.384: INFO: Feb 5 21:41:10.384: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 5 21:41:11.394: INFO: POD NODE PHASE GRACE CONDITIONS Feb 5 21:41:11.394: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 
UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:17 +0000 UTC }] Feb 5 21:41:11.394: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:11.394: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:11.394: INFO: Feb 5 21:41:11.394: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 5 21:41:12.405: INFO: POD NODE PHASE GRACE CONDITIONS Feb 5 21:41:12.405: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 
21:41:12.405: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-05 21:40:40 +0000 UTC }] Feb 5 21:41:12.405: INFO: Feb 5 21:41:12.405: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2875 Feb 5 21:41:13.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:41:13.604: INFO: rc: 1 Feb 5 21:41:13.604: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Feb 5 21:41:23.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:41:23.786: INFO: rc: 1 Feb 5 21:41:23.786: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:41:33.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:41:33.951: INFO: rc: 1 Feb 5 21:41:33.951: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:41:43.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:41:44.107: INFO: rc: 1 Feb 5 21:41:44.107: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:41:54.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:41:54.252: INFO: rc: 1 Feb 5 21:41:54.252: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:42:04.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:42:04.367: INFO: rc: 1 Feb 5 21:42:04.367: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:42:14.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:42:14.540: INFO: rc: 1 Feb 5 21:42:14.540: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:42:24.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:42:24.730: INFO: rc: 1 Feb 5 21:42:24.730: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:42:34.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:42:34.917: INFO: rc: 1 Feb 5 21:42:34.917: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 
21:42:44.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:42:45.031: INFO: rc: 1 Feb 5 21:42:45.032: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:42:55.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:42:55.211: INFO: rc: 1 Feb 5 21:42:55.212: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:43:05.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:43:05.390: INFO: rc: 1 Feb 5 21:43:05.390: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:43:15.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:43:15.560: INFO: rc: 1 Feb 5 21:43:15.560: 
INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:43:25.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:43:25.714: INFO: rc: 1 Feb 5 21:43:25.715: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:43:35.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:43:35.824: INFO: rc: 1 Feb 5 21:43:35.825: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:43:45.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:43:45.984: INFO: rc: 1 Feb 5 21:43:45.984: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: 
Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:43:55.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:43:56.170: INFO: rc: 1 Feb 5 21:43:56.171: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:44:06.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:44:06.291: INFO: rc: 1 Feb 5 21:44:06.292: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:44:16.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:44:16.462: INFO: rc: 1 Feb 5 21:44:16.462: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:44:26.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Feb 5 21:44:26.683: INFO: rc: 1 Feb 5 21:44:26.683: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:44:36.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:44:36.880: INFO: rc: 1 Feb 5 21:44:36.881: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:44:46.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:44:47.022: INFO: rc: 1 Feb 5 21:44:47.023: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:44:57.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:44:57.204: INFO: rc: 1 Feb 5 21:44:57.204: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh 
-x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:45:07.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:45:07.392: INFO: rc: 1 Feb 5 21:45:07.392: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:45:17.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:45:17.572: INFO: rc: 1 Feb 5 21:45:17.572: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:45:27.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:45:27.743: INFO: rc: 1 Feb 5 21:45:27.743: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:45:37.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:45:37.937: INFO: rc: 1 Feb 5 21:45:37.938: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:45:47.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:45:48.107: INFO: rc: 1 Feb 5 21:45:48.107: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:45:58.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:45:58.364: INFO: rc: 1 Feb 5 21:45:58.365: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:46:08.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:46:08.560: INFO: rc: 1 Feb 5 21:46:08.561: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Feb 5 21:46:18.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2875 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 5 21:46:18.808: INFO: rc: 1 Feb 5 21:46:18.808: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Feb 5 21:46:18.808: INFO: Scaling statefulset ss to 0 Feb 5 21:46:18.819: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Feb 5 21:46:18.823: INFO: Deleting all statefulset in ns statefulset-2875 Feb 5 21:46:18.825: INFO: Scaling statefulset ss to 0 Feb 5 21:46:18.833: INFO: Waiting for statefulset status.replicas updated to 0 Feb 5 21:46:18.835: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:46:18.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2875" for this suite. 
• [SLOW TEST:361.966 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":105,"skipped":1568,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:46:18.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:46:18.994: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Feb 5 21:46:19.003: INFO: Pod name sample-pod: Found 0 pods out of 1 Feb 5 21:46:24.018: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 5 21:46:26.031: INFO: Creating deployment 
"test-rolling-update-deployment" Feb 5 21:46:26.038: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Feb 5 21:46:26.057: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Feb 5 21:46:28.071: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Feb 5 21:46:28.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:46:30.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:46:32.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716535986, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:46:34.081: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Feb 5 21:46:34.092: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2424 /apis/apps/v1/namespaces/deployment-2424/deployments/test-rolling-update-deployment 5c34953c-e89c-4006-8b78-0cdfc6bdae8c 6611448 1 2020-02-05 21:46:26 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004e68a78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-05 21:46:26 +0000 UTC,LastTransitionTime:2020-02-05 21:46:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-05 21:46:32 +0000 UTC,LastTransitionTime:2020-02-05 21:46:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Feb 5 21:46:34.095: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 
deployment-2424 /apis/apps/v1/namespaces/deployment-2424/replicasets/test-rolling-update-deployment-67cf4f6444 bc331e91-fe7c-4cee-b22b-26c93691e7fc 6611437 1 2020-02-05 21:46:26 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 5c34953c-e89c-4006-8b78-0cdfc6bdae8c 0xc004e68f17 0xc004e68f18}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004e68f88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 5 21:46:34.095: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Feb 5 21:46:34.095: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2424 /apis/apps/v1/namespaces/deployment-2424/replicasets/test-rolling-update-controller 76b2233f-613d-44af-a6e3-61d23ba5118b 6611446 2 2020-02-05 21:46:18 +0000 UTC map[name:sample-pod 
pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 5c34953c-e89c-4006-8b78-0cdfc6bdae8c 0xc004e68e47 0xc004e68e48}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004e68ea8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 5 21:46:34.100: INFO: Pod "test-rolling-update-deployment-67cf4f6444-9sj94" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-9sj94 test-rolling-update-deployment-67cf4f6444- deployment-2424 /api/v1/namespaces/deployment-2424/pods/test-rolling-update-deployment-67cf4f6444-9sj94 387482e3-df0a-4ab5-9f23-96cb324db3aa 6611436 0 2020-02-05 21:46:26 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 bc331e91-fe7c-4cee-b22b-26c93691e7fc 0xc004e693e7 0xc004e693e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lkp65,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lkp65,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lkp65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostnam
e:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 21:46:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 21:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 21:46:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 21:46:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-05 21:46:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 21:46:31 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://17424bb1510b45470b7c97c79fcc824f129af4afedd753ce480586313b8b0fa6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:46:34.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2424" for this suite. • [SLOW TEST:15.221 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":106,"skipped":1577,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:46:34.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should 
not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:46:42.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1808" for this suite. • [SLOW TEST:8.410 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":107,"skipped":1583,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:46:42.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-6bdc25d9-2e45-4df7-becd-ed0c88cd790e STEP: Creating a pod to test consume secrets Feb 5 21:46:42.677: INFO: 
Waiting up to 5m0s for pod "pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e" in namespace "secrets-2049" to be "success or failure" Feb 5 21:46:42.701: INFO: Pod "pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.406371ms Feb 5 21:46:44.707: INFO: Pod "pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02999042s Feb 5 21:46:46.714: INFO: Pod "pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03732602s Feb 5 21:46:48.719: INFO: Pod "pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042742359s Feb 5 21:46:50.727: INFO: Pod "pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049882811s Feb 5 21:46:52.737: INFO: Pod "pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.060121039s Feb 5 21:46:54.743: INFO: Pod "pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.066504852s STEP: Saw pod success Feb 5 21:46:54.744: INFO: Pod "pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e" satisfied condition "success or failure" Feb 5 21:46:54.747: INFO: Trying to get logs from node jerma-node pod pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e container secret-volume-test: STEP: delete the pod Feb 5 21:46:54.928: INFO: Waiting for pod pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e to disappear Feb 5 21:46:55.023: INFO: Pod pod-secrets-60e2feb5-1e56-4ab5-9450-f892b3470e9e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:46:55.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2049" for this suite. 
• [SLOW TEST:12.505 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1593,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:46:55.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-49ff1c07-a758-4d21-aeca-38fc9cb9da81 STEP: Creating secret with name secret-projected-all-test-volume-b07605db-f998-46e2-b5e4-c937542cdb60 STEP: Creating a pod to test Check all projections for projected volume plugin Feb 5 21:46:55.190: INFO: Waiting up to 5m0s for pod "projected-volume-4bf78215-fbef-4611-b75e-87b792eab5d9" in namespace "projected-8321" to be "success or failure" Feb 5 21:46:55.219: INFO: Pod 
"projected-volume-4bf78215-fbef-4611-b75e-87b792eab5d9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.286385ms Feb 5 21:46:57.226: INFO: Pod "projected-volume-4bf78215-fbef-4611-b75e-87b792eab5d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036027963s Feb 5 21:46:59.237: INFO: Pod "projected-volume-4bf78215-fbef-4611-b75e-87b792eab5d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047049735s Feb 5 21:47:01.244: INFO: Pod "projected-volume-4bf78215-fbef-4611-b75e-87b792eab5d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053939517s Feb 5 21:47:03.253: INFO: Pod "projected-volume-4bf78215-fbef-4611-b75e-87b792eab5d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06296129s STEP: Saw pod success Feb 5 21:47:03.253: INFO: Pod "projected-volume-4bf78215-fbef-4611-b75e-87b792eab5d9" satisfied condition "success or failure" Feb 5 21:47:03.259: INFO: Trying to get logs from node jerma-node pod projected-volume-4bf78215-fbef-4611-b75e-87b792eab5d9 container projected-all-volume-test: STEP: delete the pod Feb 5 21:47:03.314: INFO: Waiting for pod projected-volume-4bf78215-fbef-4611-b75e-87b792eab5d9 to disappear Feb 5 21:47:03.372: INFO: Pod projected-volume-4bf78215-fbef-4611-b75e-87b792eab5d9 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:47:03.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8321" for this suite. 
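The projected-volume test above combines all three projection sources (configMap, secret, and downwardAPI) into one volume. A rough sketch of such a pod, with hypothetical names and keys:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["sh", "-c", "cat /projected-volume/podname /projected-volume/secret-data /projected-volume/configmap-data"]
    volumeMounts:
    - name: podinfo
      mountPath: /projected-volume
  volumes:
  - name: podinfo
    projected:
      sources:                     # all three projection APIs in one volume
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - secret:
          name: secret-projected-all-test-volume    # hypothetical
          items:
          - key: data-1
            path: secret-data
      - configMap:
          name: configmap-projected-all-test-volume # hypothetical
          items:
          - key: data-1
            path: configmap-data
```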
• [SLOW TEST:8.349 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1601,"failed":0} [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:47:03.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:47:03.435: INFO: Creating ReplicaSet my-hostname-basic-d7ae82e6-cecf-4637-a3c5-9f8659bfb1f7 Feb 5 21:47:03.532: INFO: Pod name my-hostname-basic-d7ae82e6-cecf-4637-a3c5-9f8659bfb1f7: Found 0 pods out of 1 Feb 5 21:47:08.552: INFO: Pod name my-hostname-basic-d7ae82e6-cecf-4637-a3c5-9f8659bfb1f7: Found 1 pods out of 1 Feb 5 21:47:08.552: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d7ae82e6-cecf-4637-a3c5-9f8659bfb1f7" is running Feb 5 21:47:10.565: INFO: Pod "my-hostname-basic-d7ae82e6-cecf-4637-a3c5-9f8659bfb1f7-xbjpv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2020-02-05 21:47:03 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 21:47:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d7ae82e6-cecf-4637-a3c5-9f8659bfb1f7]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 21:47:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-d7ae82e6-cecf-4637-a3c5-9f8659bfb1f7]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-05 21:47:03 +0000 UTC Reason: Message:}]) Feb 5 21:47:10.565: INFO: Trying to dial the pod Feb 5 21:47:15.595: INFO: Controller my-hostname-basic-d7ae82e6-cecf-4637-a3c5-9f8659bfb1f7: Got expected result from replica 1 [my-hostname-basic-d7ae82e6-cecf-4637-a3c5-9f8659bfb1f7-xbjpv]: "my-hostname-basic-d7ae82e6-cecf-4637-a3c5-9f8659bfb1f7-xbjpv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:47:15.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1397" for this suite. 
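The ReplicaSet test above creates one replica of a "serve hostname" container and dials each pod, expecting it to respond with its own pod name. A sketch of the object involved (the image and port are assumptions; the e2e suite uses a public serve-hostname test image):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic-example       # hypothetical; the log uses a random UUID suffix
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1  # assumed image/tag
        ports:
        - containerPort: 9376            # serve-hostname's default port
```

The "Got expected result from replica 1" line corresponds to an HTTP GET against the pod returning the pod's own name.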
• [SLOW TEST:12.227 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":110,"skipped":1601,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:47:15.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-0037c20b-31d4-4317-9812-3c7a59b06330 STEP: Creating a pod to test consume secrets Feb 5 21:47:15.823: INFO: Waiting up to 5m0s for pod "pod-secrets-8b0601e2-aa4d-4f54-bd4a-c09e133b4b00" in namespace "secrets-1734" to be "success or failure" Feb 5 21:47:15.829: INFO: Pod "pod-secrets-8b0601e2-aa4d-4f54-bd4a-c09e133b4b00": Phase="Pending", Reason="", readiness=false. Elapsed: 5.838105ms Feb 5 21:47:17.836: INFO: Pod "pod-secrets-8b0601e2-aa4d-4f54-bd4a-c09e133b4b00": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.012903181s Feb 5 21:47:19.844: INFO: Pod "pod-secrets-8b0601e2-aa4d-4f54-bd4a-c09e133b4b00": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02041949s Feb 5 21:47:21.854: INFO: Pod "pod-secrets-8b0601e2-aa4d-4f54-bd4a-c09e133b4b00": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030712039s Feb 5 21:47:23.865: INFO: Pod "pod-secrets-8b0601e2-aa4d-4f54-bd4a-c09e133b4b00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041367395s STEP: Saw pod success Feb 5 21:47:23.865: INFO: Pod "pod-secrets-8b0601e2-aa4d-4f54-bd4a-c09e133b4b00" satisfied condition "success or failure" Feb 5 21:47:23.873: INFO: Trying to get logs from node jerma-node pod pod-secrets-8b0601e2-aa4d-4f54-bd4a-c09e133b4b00 container secret-volume-test: STEP: delete the pod Feb 5 21:47:23.958: INFO: Waiting for pod pod-secrets-8b0601e2-aa4d-4f54-bd4a-c09e133b4b00 to disappear Feb 5 21:47:23.965: INFO: Pod pod-secrets-8b0601e2-aa4d-4f54-bd4a-c09e133b4b00 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:47:23.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1734" for this suite. 
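The "non-root with defaultMode and fsGroup" variant above exercises a secret volume whose files get a volume-wide `defaultMode`, mounted into a pod running as a non-root user with an `fsGroup`. A minimal sketch (UIDs, GIDs, and names are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-fsgroup-example   # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # non-root user
    fsGroup: 1001                     # volume files are group-owned by this GID
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example # hypothetical
      defaultMode: 0440               # applies to every file in the volume
```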
• [SLOW TEST:8.387 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1617,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:47:24.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Feb 5 21:47:24.149: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 5 21:47:24.162: INFO: Waiting for terminating namespaces to be deleted... 
Feb 5 21:47:24.165: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 5 21:47:24.173: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 5 21:47:24.173: INFO: Container kube-proxy ready: true, restart count 0 Feb 5 21:47:24.173: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 5 21:47:24.173: INFO: Container weave ready: true, restart count 1 Feb 5 21:47:24.173: INFO: Container weave-npc ready: true, restart count 0 Feb 5 21:47:24.173: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 5 21:47:24.190: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 5 21:47:24.191: INFO: Container etcd ready: true, restart count 1 Feb 5 21:47:24.191: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 5 21:47:24.191: INFO: Container kube-apiserver ready: true, restart count 1 Feb 5 21:47:24.191: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 5 21:47:24.191: INFO: Container coredns ready: true, restart count 0 Feb 5 21:47:24.191: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 5 21:47:24.191: INFO: Container coredns ready: true, restart count 0 Feb 5 21:47:24.191: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 5 21:47:24.191: INFO: Container kube-proxy ready: true, restart count 0 Feb 5 21:47:24.191: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 5 21:47:24.191: INFO: Container weave ready: true, restart count 0 Feb 5 21:47:24.191: 
INFO: Container weave-npc ready: true, restart count 0 Feb 5 21:47:24.191: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 5 21:47:24.191: INFO: Container kube-controller-manager ready: true, restart count 3 Feb 5 21:47:24.191: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 5 21:47:24.191: INFO: Container kube-scheduler ready: true, restart count 5 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5a66604a-31ae-416f-92af-ef4a0d140f0c 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-5a66604a-31ae-416f-92af-ef4a0d140f0c off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-5a66604a-31ae-416f-92af-ef4a0d140f0c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:52:42.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5086" for this suite. 
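The scheduling predicate above checks that a hostPort bound on `0.0.0.0` conflicts with the same hostPort/protocol on `127.0.0.1`. A sketch of the two pods involved (pod names match the log's STEP lines; images and node pinning are assumptions — the test actually pins via the random node label applied above rather than `nodeName`):

```yaml
# pod4: hostIP omitted, so it binds hostPort 54322 on all addresses (0.0.0.0)
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeName: jerma-node               # simplification; the test uses a node-label selector
  containers:
  - name: agnhost                    # assumed test image
    image: k8s.gcr.io/e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080
      hostPort: 54322
      protocol: TCP
---
# pod5: same hostPort and protocol but hostIP 127.0.0.1 — this overlaps with
# pod4's 0.0.0.0 binding, so the scheduler must leave pod5 Pending
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeName: jerma-node
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
```

This is why the test runs for most of its 318 seconds: it waits out the scheduling timeout confirming pod5 is never placed.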
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:318.568 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":112,"skipped":1630,"failed":0} [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:52:42.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:52:48.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "watch-5532" for this suite. • [SLOW TEST:6.483 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":113,"skipped":1630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:52:49.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Feb 5 21:52:49.226: INFO: >>> kubeConfig: /root/.kube/config Feb 5 21:52:52.139: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:53:03.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6235" for this suite. 
• [SLOW TEST:14.604 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":114,"skipped":1654,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:53:03.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:53:03.783: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 5 21:53:06.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5761 create -f -' Feb 5 21:53:09.017: INFO: stderr: "" Feb 5 21:53:09.017: INFO: stdout: "e2e-test-crd-publish-openapi-6136-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 5 21:53:09.017: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5761 delete e2e-test-crd-publish-openapi-6136-crds test-cr' Feb 5 21:53:09.157: INFO: stderr: "" Feb 5 21:53:09.158: INFO: stdout: "e2e-test-crd-publish-openapi-6136-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Feb 5 21:53:09.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5761 apply -f -' Feb 5 21:53:09.464: INFO: stderr: "" Feb 5 21:53:09.464: INFO: stdout: "e2e-test-crd-publish-openapi-6136-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Feb 5 21:53:09.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5761 delete e2e-test-crd-publish-openapi-6136-crds test-cr' Feb 5 21:53:09.565: INFO: stderr: "" Feb 5 21:53:09.565: INFO: stdout: "e2e-test-crd-publish-openapi-6136-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 5 21:53:09.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6136-crds' Feb 5 21:53:09.819: INFO: stderr: "" Feb 5 21:53:09.819: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6136-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. 
In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:53:11.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5761" for this suite. • [SLOW TEST:8.048 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":115,"skipped":1666,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:53:11.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1768 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 5 21:53:11.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6574' Feb 5 21:53:11.980: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 5 21:53:11.980: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1773 Feb 5 21:53:11.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6574' Feb 5 21:53:12.194: INFO: stderr: "" Feb 5 21:53:12.194: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:53:12.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6574" for this suite. 
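The deprecation warning above reflects that `kubectl run --generator=job/v1` was being phased out in favor of creating the Job object directly. A sketch of the Job the command generates (name and image come from the log; the spec layout is an assumption):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-httpd-job
spec:
  template:
    spec:
      restartPolicy: OnFailure     # failed containers restart in place instead of spawning new pods
      containers:
      - name: e2e-test-httpd-job
        image: docker.io/library/httpd:2.4.38-alpine
```

Note that the suggested replacement, `kubectl create job`, has no `--restart` flag (it defaults the pod template to `restartPolicy: Never`), so reproducing this test's OnFailure policy requires a manifest like the one above.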
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":116,"skipped":1670,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:53:12.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7240 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7240 I0205 21:53:12.654427 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7240, replica count: 2 I0205 21:53:15.706300 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 21:53:18.706584 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 21:53:21.706985 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I0205 21:53:24.707452 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 21:53:27.707885 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 5 21:53:27.708: INFO: Creating new exec pod Feb 5 21:53:36.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7240 execpodjzl27 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 5 21:53:37.088: INFO: stderr: "I0205 21:53:36.909334 2179 log.go:172] (0xc00090c580) (0xc00069dea0) Create stream\nI0205 21:53:36.909463 2179 log.go:172] (0xc00090c580) (0xc00069dea0) Stream added, broadcasting: 1\nI0205 21:53:36.916511 2179 log.go:172] (0xc00090c580) Reply frame received for 1\nI0205 21:53:36.916562 2179 log.go:172] (0xc00090c580) (0xc00063a6e0) Create stream\nI0205 21:53:36.916579 2179 log.go:172] (0xc00090c580) (0xc00063a6e0) Stream added, broadcasting: 3\nI0205 21:53:36.918939 2179 log.go:172] (0xc00090c580) Reply frame received for 3\nI0205 21:53:36.918969 2179 log.go:172] (0xc00090c580) (0xc00069df40) Create stream\nI0205 21:53:36.918991 2179 log.go:172] (0xc00090c580) (0xc00069df40) Stream added, broadcasting: 5\nI0205 21:53:36.922249 2179 log.go:172] (0xc00090c580) Reply frame received for 5\nI0205 21:53:37.005557 2179 log.go:172] (0xc00090c580) Data frame received for 5\nI0205 21:53:37.005609 2179 log.go:172] (0xc00069df40) (5) Data frame handling\nI0205 21:53:37.005641 2179 log.go:172] (0xc00069df40) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0205 21:53:37.012086 2179 log.go:172] (0xc00090c580) Data frame received for 5\nI0205 21:53:37.012116 2179 log.go:172] (0xc00069df40) (5) Data frame handling\nI0205 21:53:37.012131 2179 log.go:172] (0xc00069df40) (5) Data frame sent\nConnection to 
externalname-service 80 port [tcp/http] succeeded!\nI0205 21:53:37.080128 2179 log.go:172] (0xc00090c580) Data frame received for 1\nI0205 21:53:37.080231 2179 log.go:172] (0xc00090c580) (0xc00063a6e0) Stream removed, broadcasting: 3\nI0205 21:53:37.080271 2179 log.go:172] (0xc00069dea0) (1) Data frame handling\nI0205 21:53:37.080285 2179 log.go:172] (0xc00069dea0) (1) Data frame sent\nI0205 21:53:37.080301 2179 log.go:172] (0xc00090c580) (0xc00069df40) Stream removed, broadcasting: 5\nI0205 21:53:37.080314 2179 log.go:172] (0xc00090c580) (0xc00069dea0) Stream removed, broadcasting: 1\nI0205 21:53:37.080322 2179 log.go:172] (0xc00090c580) Go away received\nI0205 21:53:37.080977 2179 log.go:172] (0xc00090c580) (0xc00069dea0) Stream removed, broadcasting: 1\nI0205 21:53:37.080994 2179 log.go:172] (0xc00090c580) (0xc00063a6e0) Stream removed, broadcasting: 3\nI0205 21:53:37.081001 2179 log.go:172] (0xc00090c580) (0xc00069df40) Stream removed, broadcasting: 5\n" Feb 5 21:53:37.088: INFO: stdout: "" Feb 5 21:53:37.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7240 execpodjzl27 -- /bin/sh -x -c nc -zv -t -w 2 10.96.67.134 80' Feb 5 21:53:37.418: INFO: stderr: "I0205 21:53:37.249093 2200 log.go:172] (0xc000a1ed10) (0xc000a5eaa0) Create stream\nI0205 21:53:37.249251 2200 log.go:172] (0xc000a1ed10) (0xc000a5eaa0) Stream added, broadcasting: 1\nI0205 21:53:37.256906 2200 log.go:172] (0xc000a1ed10) Reply frame received for 1\nI0205 21:53:37.256975 2200 log.go:172] (0xc000a1ed10) (0xc00058e780) Create stream\nI0205 21:53:37.256988 2200 log.go:172] (0xc000a1ed10) (0xc00058e780) Stream added, broadcasting: 3\nI0205 21:53:37.258196 2200 log.go:172] (0xc000a1ed10) Reply frame received for 3\nI0205 21:53:37.258239 2200 log.go:172] (0xc000a1ed10) (0xc00070cb40) Create stream\nI0205 21:53:37.258248 2200 log.go:172] (0xc000a1ed10) (0xc00070cb40) Stream added, broadcasting: 5\nI0205 21:53:37.260223 2200 log.go:172] (0xc000a1ed10) 
Reply frame received for 5\nI0205 21:53:37.323652 2200 log.go:172] (0xc000a1ed10) Data frame received for 5\nI0205 21:53:37.323729 2200 log.go:172] (0xc00070cb40) (5) Data frame handling\nI0205 21:53:37.323753 2200 log.go:172] (0xc00070cb40) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.67.134 80\nI0205 21:53:37.326463 2200 log.go:172] (0xc000a1ed10) Data frame received for 5\nI0205 21:53:37.326492 2200 log.go:172] (0xc00070cb40) (5) Data frame handling\nI0205 21:53:37.326503 2200 log.go:172] (0xc00070cb40) (5) Data frame sent\nConnection to 10.96.67.134 80 port [tcp/http] succeeded!\nI0205 21:53:37.402463 2200 log.go:172] (0xc000a1ed10) Data frame received for 1\nI0205 21:53:37.402643 2200 log.go:172] (0xc000a5eaa0) (1) Data frame handling\nI0205 21:53:37.402687 2200 log.go:172] (0xc000a5eaa0) (1) Data frame sent\nI0205 21:53:37.405331 2200 log.go:172] (0xc000a1ed10) (0xc00070cb40) Stream removed, broadcasting: 5\nI0205 21:53:37.405478 2200 log.go:172] (0xc000a1ed10) (0xc000a5eaa0) Stream removed, broadcasting: 1\nI0205 21:53:37.406261 2200 log.go:172] (0xc000a1ed10) (0xc00058e780) Stream removed, broadcasting: 3\nI0205 21:53:37.406291 2200 log.go:172] (0xc000a1ed10) Go away received\nI0205 21:53:37.406493 2200 log.go:172] (0xc000a1ed10) (0xc000a5eaa0) Stream removed, broadcasting: 1\nI0205 21:53:37.406513 2200 log.go:172] (0xc000a1ed10) (0xc00058e780) Stream removed, broadcasting: 3\nI0205 21:53:37.406527 2200 log.go:172] (0xc000a1ed10) (0xc00070cb40) Stream removed, broadcasting: 5\n" Feb 5 21:53:37.419: INFO: stdout: "" Feb 5 21:53:37.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7240 execpodjzl27 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32270' Feb 5 21:53:37.722: INFO: stderr: "I0205 21:53:37.552172 2223 log.go:172] (0xc00044a000) (0xc00091a000) Create stream\nI0205 21:53:37.552267 2223 log.go:172] (0xc00044a000) (0xc00091a000) Stream added, broadcasting: 1\nI0205 21:53:37.555115 2223 log.go:172] 
(0xc00044a000) Reply frame received for 1\nI0205 21:53:37.555137 2223 log.go:172] (0xc00044a000) (0xc00091a0a0) Create stream\nI0205 21:53:37.555142 2223 log.go:172] (0xc00044a000) (0xc00091a0a0) Stream added, broadcasting: 3\nI0205 21:53:37.556174 2223 log.go:172] (0xc00044a000) Reply frame received for 3\nI0205 21:53:37.556194 2223 log.go:172] (0xc00044a000) (0xc00065fb80) Create stream\nI0205 21:53:37.556202 2223 log.go:172] (0xc00044a000) (0xc00065fb80) Stream added, broadcasting: 5\nI0205 21:53:37.557331 2223 log.go:172] (0xc00044a000) Reply frame received for 5\nI0205 21:53:37.626419 2223 log.go:172] (0xc00044a000) Data frame received for 5\nI0205 21:53:37.626642 2223 log.go:172] (0xc00065fb80) (5) Data frame handling\nI0205 21:53:37.626698 2223 log.go:172] (0xc00065fb80) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32270\nI0205 21:53:37.628971 2223 log.go:172] (0xc00044a000) Data frame received for 5\nI0205 21:53:37.629037 2223 log.go:172] (0xc00065fb80) (5) Data frame handling\nI0205 21:53:37.629074 2223 log.go:172] (0xc00065fb80) (5) Data frame sent\nConnection to 10.96.2.250 32270 port [tcp/32270] succeeded!\nI0205 21:53:37.713572 2223 log.go:172] (0xc00044a000) Data frame received for 1\nI0205 21:53:37.713669 2223 log.go:172] (0xc00091a000) (1) Data frame handling\nI0205 21:53:37.713703 2223 log.go:172] (0xc00091a000) (1) Data frame sent\nI0205 21:53:37.713988 2223 log.go:172] (0xc00044a000) (0xc00091a0a0) Stream removed, broadcasting: 3\nI0205 21:53:37.714035 2223 log.go:172] (0xc00044a000) (0xc00091a000) Stream removed, broadcasting: 1\nI0205 21:53:37.714853 2223 log.go:172] (0xc00044a000) (0xc00065fb80) Stream removed, broadcasting: 5\nI0205 21:53:37.714918 2223 log.go:172] (0xc00044a000) Go away received\nI0205 21:53:37.715022 2223 log.go:172] (0xc00044a000) (0xc00091a000) Stream removed, broadcasting: 1\nI0205 21:53:37.715046 2223 log.go:172] (0xc00044a000) (0xc00091a0a0) Stream removed, broadcasting: 3\nI0205 21:53:37.715085 2223 log.go:172] 
(0xc00044a000) (0xc00065fb80) Stream removed, broadcasting: 5\n" Feb 5 21:53:37.722: INFO: stdout: "" Feb 5 21:53:37.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7240 execpodjzl27 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32270' Feb 5 21:53:38.052: INFO: stderr: "I0205 21:53:37.888593 2243 log.go:172] (0xc000592160) (0xc00063de00) Create stream\nI0205 21:53:37.888766 2243 log.go:172] (0xc000592160) (0xc00063de00) Stream added, broadcasting: 1\nI0205 21:53:37.896963 2243 log.go:172] (0xc000592160) Reply frame received for 1\nI0205 21:53:37.897049 2243 log.go:172] (0xc000592160) (0xc00075b540) Create stream\nI0205 21:53:37.897113 2243 log.go:172] (0xc000592160) (0xc00075b540) Stream added, broadcasting: 3\nI0205 21:53:37.904519 2243 log.go:172] (0xc000592160) Reply frame received for 3\nI0205 21:53:37.904656 2243 log.go:172] (0xc000592160) (0xc00096c000) Create stream\nI0205 21:53:37.904731 2243 log.go:172] (0xc000592160) (0xc00096c000) Stream added, broadcasting: 5\nI0205 21:53:37.909652 2243 log.go:172] (0xc000592160) Reply frame received for 5\nI0205 21:53:37.979653 2243 log.go:172] (0xc000592160) Data frame received for 5\nI0205 21:53:37.979686 2243 log.go:172] (0xc00096c000) (5) Data frame handling\nI0205 21:53:37.979704 2243 log.go:172] (0xc00096c000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32270\nI0205 21:53:37.982166 2243 log.go:172] (0xc000592160) Data frame received for 5\nI0205 21:53:37.982181 2243 log.go:172] (0xc00096c000) (5) Data frame handling\nI0205 21:53:37.982191 2243 log.go:172] (0xc00096c000) (5) Data frame sent\nConnection to 10.96.1.234 32270 port [tcp/32270] succeeded!\nI0205 21:53:38.045427 2243 log.go:172] (0xc000592160) (0xc00075b540) Stream removed, broadcasting: 3\nI0205 21:53:38.045616 2243 log.go:172] (0xc000592160) Data frame received for 1\nI0205 21:53:38.045680 2243 log.go:172] (0xc00063de00) (1) Data frame handling\nI0205 21:53:38.045698 2243 log.go:172] 
(0xc00063de00) (1) Data frame sent\nI0205 21:53:38.045704 2243 log.go:172] (0xc000592160) (0xc00096c000) Stream removed, broadcasting: 5\nI0205 21:53:38.045739 2243 log.go:172] (0xc000592160) (0xc00063de00) Stream removed, broadcasting: 1\nI0205 21:53:38.045753 2243 log.go:172] (0xc000592160) Go away received\nI0205 21:53:38.046303 2243 log.go:172] (0xc000592160) (0xc00063de00) Stream removed, broadcasting: 1\nI0205 21:53:38.046314 2243 log.go:172] (0xc000592160) (0xc00075b540) Stream removed, broadcasting: 3\nI0205 21:53:38.046318 2243 log.go:172] (0xc000592160) (0xc00096c000) Stream removed, broadcasting: 5\n" Feb 5 21:53:38.052: INFO: stdout: "" Feb 5 21:53:38.052: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:53:38.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7240" for this suite. 
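The repeated `nc -zv -t -w 2 <host> <port>` probes in the exec-pod commands above are plain TCP reachability checks against the service name, the ClusterIP, and each node's IP plus NodePort. A minimal stand-in for that check (the helper name is mine, not part of the e2e framework) using only the standard socket module:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port completes within
    timeout, mirroring what `nc -zv -t -w 2 host port` reports."""
    try:
        # create_connection performs the full TCP handshake, then we
        # immediately close the socket -- a zero-I/O "port scan" probe.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Note the e2e test runs the equivalent probe from inside the exec pod rather than from the test driver, so that cluster DNS resolution and kube-proxy NodePort rules are actually exercised.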
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.947 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":117,"skipped":1674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:53:38.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 5 21:53:38.793: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 5 21:53:40.813: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:53:42.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:53:44.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:53:47.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:53:48.822: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536418, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 5 21:53:52.009: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:53:52.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8195" for this suite. STEP: Destroying namespace "webhook-8195-markers" for this suite. 
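The repeated DeploymentStatus dumps above show the wait loop polling until the webhook deployment stops reporting "MinimumReplicasUnavailable". A sketch of the completeness predicate such a loop evaluates (field names follow the apps/v1 status shown in the log; the helper itself is hypothetical):

```python
def deployment_complete(status: dict, desired_replicas: int) -> bool:
    """A deployment is treated as complete once every desired replica
    is both updated and available and none remain unavailable."""
    return (
        status.get("updatedReplicas", 0) == desired_replicas
        and status.get("availableReplicas", 0) == desired_replicas
        and status.get("unavailableReplicas", 0) == 0
    )
```

In the log, `ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1` keeps this predicate false for roughly ten seconds until the webhook pod passes its readiness check.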
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.587 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":118,"skipped":1703,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:53:52.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Feb 5 21:53:52.853: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:54:06.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8713" for this suite. • [SLOW TEST:14.256 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1713,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:54:06.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-9d930122-7b65-4a62-a65d-685ef41ee053 STEP: Creating a pod to test consume configMaps Feb 5 21:54:07.071: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7f6fd5be-5e2f-4d55-b00a-19061dffd5eb" in namespace "projected-7281" to be "success or failure" Feb 5 21:54:07.076: INFO: Pod "pod-projected-configmaps-7f6fd5be-5e2f-4d55-b00a-19061dffd5eb": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.412292ms Feb 5 21:54:09.083: INFO: Pod "pod-projected-configmaps-7f6fd5be-5e2f-4d55-b00a-19061dffd5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011861363s Feb 5 21:54:11.097: INFO: Pod "pod-projected-configmaps-7f6fd5be-5e2f-4d55-b00a-19061dffd5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025678448s Feb 5 21:54:13.104: INFO: Pod "pod-projected-configmaps-7f6fd5be-5e2f-4d55-b00a-19061dffd5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032455872s Feb 5 21:54:15.111: INFO: Pod "pod-projected-configmaps-7f6fd5be-5e2f-4d55-b00a-19061dffd5eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039445454s STEP: Saw pod success Feb 5 21:54:15.111: INFO: Pod "pod-projected-configmaps-7f6fd5be-5e2f-4d55-b00a-19061dffd5eb" satisfied condition "success or failure" Feb 5 21:54:15.117: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-7f6fd5be-5e2f-4d55-b00a-19061dffd5eb container projected-configmap-volume-test: STEP: delete the pod Feb 5 21:54:15.148: INFO: Waiting for pod pod-projected-configmaps-7f6fd5be-5e2f-4d55-b00a-19061dffd5eb to disappear Feb 5 21:54:15.160: INFO: Pod pod-projected-configmaps-7f6fd5be-5e2f-4d55-b00a-19061dffd5eb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:54:15.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7281" for this suite. 
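The `Waiting up to 5m0s for pod ... to be "success or failure"` lines above, with their roughly two-second gaps, reflect a poll-until-timeout pattern. A generic sketch of that pattern (the function and its signature are illustrative, not the framework's actual API):

```python
import time

def wait_for(predicate, timeout: float, interval: float = 2.0) -> bool:
    """Poll predicate() every `interval` seconds until it returns True
    or `timeout` elapses; returns the final predicate result."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    # One last check at the deadline so a success that lands exactly at
    # timeout is not misreported as a failure.
    return predicate()
```

Here the predicate would fetch the pod and test `pod.status.phase in ("Succeeded", "Failed")`, which is why each elapsed-time log line repeats `Phase="Pending"` until the container exits.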
• [SLOW TEST:8.182 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1726,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:54:15.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6830.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6830.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6830.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6830.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6830.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6830.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 5 21:54:27.362: INFO: DNS probes using dns-6830/dns-test-ea32fb15-0cfc-4307-961a-cd365694dd7c succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:54:29.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6830" for this suite. 
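The probe scripts above derive the pod's DNS A-record name by replacing the dots in the pod IP with dashes (`hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6830.pod.cluster.local"}'`). The same transform, as a small Python helper (the function name is mine):

```python
def pod_a_record(pod_ip: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the A-record name cluster DNS serves for an IPv4 pod IP,
    e.g. 10.96.1.234 in dns-6830 -> 10-96-1-234.dns-6830.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"
```

The test then resolves that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) from inside the probe pod and writes an `OK` marker file for each lookup that returns an answer.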
• [SLOW TEST:13.946 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":121,"skipped":1730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:54:29.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Feb 5 21:54:29.333: INFO: Number of nodes with available pods: 0 Feb 5 21:54:29.333: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:30.839: INFO: Number of nodes with available pods: 0 Feb 5 21:54:30.839: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:31.515: INFO: Number of nodes with available pods: 0 Feb 5 21:54:31.515: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:32.597: INFO: Number of nodes with available pods: 0 Feb 5 21:54:32.597: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:33.346: INFO: Number of nodes with available pods: 0 Feb 5 21:54:33.346: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:34.378: INFO: Number of nodes with available pods: 0 Feb 5 21:54:34.378: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:36.585: INFO: Number of nodes with available pods: 0 Feb 5 21:54:36.585: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:38.307: INFO: Number of nodes with available pods: 0 Feb 5 21:54:38.307: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:38.345: INFO: Number of nodes with available pods: 0 Feb 5 21:54:38.345: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:39.351: INFO: Number of nodes with available pods: 0 Feb 5 21:54:39.351: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:40.347: INFO: Number of nodes with available pods: 0 Feb 5 21:54:40.347: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:41.344: INFO: Number of nodes with available pods: 2 Feb 5 21:54:41.344: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Feb 5 21:54:41.385: INFO: Number of nodes with available pods: 1 Feb 5 21:54:41.386: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:42.402: INFO: Number of nodes with available pods: 1 Feb 5 21:54:42.402: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:43.397: INFO: Number of nodes with available pods: 1 Feb 5 21:54:43.397: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:44.395: INFO: Number of nodes with available pods: 1 Feb 5 21:54:44.395: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:45.399: INFO: Number of nodes with available pods: 1 Feb 5 21:54:45.399: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:46.401: INFO: Number of nodes with available pods: 1 Feb 5 21:54:46.401: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:47.397: INFO: Number of nodes with available pods: 1 Feb 5 21:54:47.397: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:48.398: INFO: Number of nodes with available pods: 1 Feb 5 21:54:48.398: INFO: Node jerma-node is running more than one daemon pod Feb 5 21:54:49.414: INFO: Number of nodes with available pods: 2 Feb 5 21:54:49.414: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4720, will wait for the garbage collector to delete the pods Feb 5 21:54:49.543: INFO: Deleting DaemonSet.extensions daemon-set took: 16.570126ms Feb 5 21:54:49.943: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.744723ms Feb 5 21:55:02.466: INFO: Number of nodes with available pods: 0 Feb 5 21:55:02.466: INFO: Number of running nodes: 0, number of available pods: 0 Feb 5 21:55:02.469: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4720/daemonsets","resourceVersion":"6613307"},"items":null} Feb 5 21:55:02.471: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4720/pods","resourceVersion":"6613307"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:55:02.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4720" for this suite. 
• [SLOW TEST:33.374 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":122,"skipped":1760,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:55:02.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:55:02.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8689" for this suite. 
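Each finished spec in this run emits a one-line JSON summary such as `{"msg":"PASSED ...","total":278,"completed":123,"skipped":1788,"failed":0}`. A small parser for tallying run progress from a stream of such lines (illustrative only, not part of the Ginkgo tooling):

```python
import json

def run_progress(summary_lines):
    """Scan per-spec JSON summary lines and return the most recent
    (completed, total, failed) counters, or None if none were found."""
    latest = None
    for line in summary_lines:
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip ordinary log lines interleaved with summaries
        if {"total", "completed", "failed"} <= rec.keys():
            latest = (rec["completed"], rec["total"], rec["failed"])
    return latest
```

Because the counters are cumulative, only the last summary line matters for overall progress; the `skipped` count similarly tracks the `S` markers printed between specs.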
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1788,"failed":0} SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:55:02.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 21:55:02.985: INFO: Creating deployment "test-recreate-deployment" Feb 5 21:55:02.994: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 5 21:55:03.093: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Feb 5 21:55:05.102: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 5 21:55:05.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:55:07.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:55:09.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536503, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:55:11.112: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 5 21:55:11.124: INFO: Updating deployment test-recreate-deployment Feb 5 21:55:11.124: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Feb 5 21:55:11.734: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6691 /apis/apps/v1/namespaces/deployment-6691/deployments/test-recreate-deployment cf7f8d11-5501-486d-b709-7ea79aa1695c 6613411 2 2020-02-05 21:55:02 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00361a0a8 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-05 21:55:11 +0000 UTC,LastTransitionTime:2020-02-05 21:55:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-05 21:55:11 +0000 UTC,LastTransitionTime:2020-02-05 21:55:03 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Feb 5 21:55:11.979: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-6691 /apis/apps/v1/namespaces/deployment-6691/replicasets/test-recreate-deployment-5f94c574ff 16ed2ab9-32de-4705-87e6-a8c5be1b789f 6613409 1 2020-02-05 21:55:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment cf7f8d11-5501-486d-b709-7ea79aa1695c 0xc0035d6127 0xc0035d6128}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd 
docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035d6188 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 5 21:55:11.979: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 5 21:55:11.979: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-6691 /apis/apps/v1/namespaces/deployment-6691/replicasets/test-recreate-deployment-799c574856 4085dc54-d3d0-4bb7-b3c8-61fa4723bedf 6613401 2 2020-02-05 21:55:02 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment cf7f8d11-5501-486d-b709-7ea79aa1695c 0xc0035d61f7 0xc0035d61f8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035d6268 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 5 21:55:11.986: INFO: Pod "test-recreate-deployment-5f94c574ff-57rfr" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-57rfr test-recreate-deployment-5f94c574ff- deployment-6691 /api/v1/namespaces/deployment-6691/pods/test-recreate-deployment-5f94c574ff-57rfr 774c4d79-832a-404a-9666-1e6248adf259 6613412 0 2020-02-05 21:55:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 16ed2ab9-32de-4705-87e6-a8c5be1b789f 0xc0035d66c7 0xc0035d66c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tf2r6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tf2r6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tf2r6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,
Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 21:55:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 21:55:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 21:55:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 21:55:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-05 21:55:11 +0000
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:55:11.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6691" for this suite. • [SLOW TEST:9.140 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":124,"skipped":1791,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:55:12.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with 
defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-9becf414-ea6a-40d5-98cc-31962531706d STEP: Creating a pod to test consume configMaps Feb 5 21:55:12.371: INFO: Waiting up to 5m0s for pod "pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d" in namespace "configmap-3099" to be "success or failure" Feb 5 21:55:12.398: INFO: Pod "pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.673645ms Feb 5 21:55:14.407: INFO: Pod "pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035618213s Feb 5 21:55:16.414: INFO: Pod "pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042845077s Feb 5 21:55:18.425: INFO: Pod "pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053391521s Feb 5 21:55:20.431: INFO: Pod "pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059433273s Feb 5 21:55:22.435: INFO: Pod "pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.063285615s STEP: Saw pod success Feb 5 21:55:22.435: INFO: Pod "pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d" satisfied condition "success or failure" Feb 5 21:55:22.437: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d container configmap-volume-test: STEP: delete the pod Feb 5 21:55:22.470: INFO: Waiting for pod pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d to disappear Feb 5 21:55:22.483: INFO: Pod pod-configmaps-e720538e-1410-4b63-9c7b-bae44bd7fd5d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:55:22.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3099" for this suite. • [SLOW TEST:10.439 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":1802,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:55:22.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Feb 5 21:55:22.562: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Feb 5 21:55:23.125: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Feb 5 21:55:26.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:55:28.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:55:30.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:55:32.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:55:34.533: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536526, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536523, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 21:55:37.380: INFO: Waited 921.385343ms for the sample-apiserver to be ready to handle requests. 
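Annotation: registering an aggregated API server like the sample above boils down to an APIService object pointing at an in-cluster Service. A hedged sketch — the group, version, and Service name below are hypothetical; only the namespace is taken from the log:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com   # hypothetical group/version
spec:
  group: wardle.example.com           # hypothetical API group
  version: v1alpha1
  service:
    name: sample-api                  # hypothetical Service fronting the deployment
    namespace: aggregator-5491        # namespace from the log above
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true         # acceptable for a sketch; real setups pin a caBundle
```

Once the APIService reports Available, kube-apiserver proxies requests for that group/version to the backing pod — the readiness the test polls for above.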
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:55:37.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5491" for this suite. • [SLOW TEST:15.501 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":126,"skipped":1810,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:55:37.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: 
updating the pod Feb 5 21:55:48.999: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c7787b39-8c2f-4e5c-adbc-2c762d79b897" Feb 5 21:55:48.999: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c7787b39-8c2f-4e5c-adbc-2c762d79b897" in namespace "pods-2588" to be "terminated due to deadline exceeded" Feb 5 21:55:49.007: INFO: Pod "pod-update-activedeadlineseconds-c7787b39-8c2f-4e5c-adbc-2c762d79b897": Phase="Running", Reason="", readiness=true. Elapsed: 8.355232ms Feb 5 21:55:51.021: INFO: Pod "pod-update-activedeadlineseconds-c7787b39-8c2f-4e5c-adbc-2c762d79b897": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.021733232s Feb 5 21:55:51.021: INFO: Pod "pod-update-activedeadlineseconds-c7787b39-8c2f-4e5c-adbc-2c762d79b897" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:55:51.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2588" for this suite. 
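Annotation: the behavior above — the pod is patched, then reaches Phase="Failed" with Reason="DeadlineExceeded" about two seconds later — follows from the pod's activeDeadlineSeconds field. A minimal sketch, with an illustrative name and deadline value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: active-deadline-demo   # illustrative
spec:
  activeDeadlineSeconds: 5     # kubelet fails the pod once it has been active this long
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"] # outlives the deadline, so the pod ends as Failed/DeadlineExceeded
```

The field can also be added or lowered on a running pod, which is exactly what the test does before waiting for the failure.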
• [SLOW TEST:13.066 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":1818,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:55:51.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Feb 5 21:55:51.261: INFO: >>> kubeConfig: /root/.kube/config Feb 5 21:55:54.479: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:56:06.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1425" for this suite. 
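Annotation: "same group and version but different kinds" means two CustomResourceDefinitions that differ only in their names. A sketch with an entirely hypothetical group and kinds:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.stable.example.com   # hypothetical
spec:
  group: stable.example.com
  scope: Namespaced
  names: {plural: foos, singular: foo, kind: Foo}
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
# A second CRD identical except for its names (kind: Bar, plural: bars)
# then shares the same group/version, and both kinds must show up in the
# served OpenAPI document -- which is what the test checks.
```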
• [SLOW TEST:15.116 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":128,"skipped":1828,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:56:06.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:56:06.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "tables-5696" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":129,"skipped":1873,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:56:06.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4b86fa21-a45b-4044-a3d2-41c7aa9e2873 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-4b86fa21-a45b-4044-a3d2-41c7aa9e2873 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:57:29.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1388" for this suite. 
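Annotation: a projected configMap volume of the kind exercised above looks roughly like this; the kubelet periodically re-syncs the volume contents after the ConfigMap is updated, and that eventual file change is what the test waits to observe (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo   # illustrative
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg   # files here track the ConfigMap, with a sync delay
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-config   # illustrative ConfigMap name
```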
• [SLOW TEST:83.405 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":1877,"failed":0} SSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:57:29.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:58:01.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3298" for this suite. 
• [SLOW TEST:32.178 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":131,"skipped":1880,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:58:01.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Feb 5 21:58:10.141: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Feb 5 21:58:25.288: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:58:25.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6262" for this suite. • [SLOW TEST:23.427 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":132,"skipped":1893,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:58:25.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1458 STEP: Creating active service to test reachability when its FQDN is referred as 
externalName for another service STEP: creating service externalsvc in namespace services-1458 STEP: creating replication controller externalsvc in namespace services-1458 I0205 21:58:25.595837 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1458, replica count: 2 I0205 21:58:28.646468 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 21:58:31.647002 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 21:58:34.647563 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 21:58:37.648065 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Feb 5 21:58:37.877: INFO: Creating new exec pod Feb 5 21:58:45.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1458 execpodtb58c -- /bin/sh -x -c nslookup nodeport-service' Feb 5 21:58:46.559: INFO: stderr: "I0205 21:58:46.125574 2282 log.go:172] (0xc0009d1ce0) (0xc000a38b40) Create stream\nI0205 21:58:46.125678 2282 log.go:172] (0xc0009d1ce0) (0xc000a38b40) Stream added, broadcasting: 1\nI0205 21:58:46.135360 2282 log.go:172] (0xc0009d1ce0) Reply frame received for 1\nI0205 21:58:46.135418 2282 log.go:172] (0xc0009d1ce0) (0xc000658780) Create stream\nI0205 21:58:46.135432 2282 log.go:172] (0xc0009d1ce0) (0xc000658780) Stream added, broadcasting: 3\nI0205 21:58:46.136577 2282 log.go:172] (0xc0009d1ce0) Reply frame received for 3\nI0205 21:58:46.136621 2282 log.go:172] (0xc0009d1ce0) (0xc000727540) Create stream\nI0205 21:58:46.136637 2282 log.go:172] (0xc0009d1ce0) (0xc000727540) Stream 
added, broadcasting: 5\nI0205 21:58:46.138271 2282 log.go:172] (0xc0009d1ce0) Reply frame received for 5\nI0205 21:58:46.227178 2282 log.go:172] (0xc0009d1ce0) Data frame received for 5\nI0205 21:58:46.227240 2282 log.go:172] (0xc000727540) (5) Data frame handling\nI0205 21:58:46.227270 2282 log.go:172] (0xc000727540) (5) Data frame sent\n+ nslookup nodeport-service\nI0205 21:58:46.424397 2282 log.go:172] (0xc0009d1ce0) Data frame received for 3\nI0205 21:58:46.424451 2282 log.go:172] (0xc000658780) (3) Data frame handling\nI0205 21:58:46.424476 2282 log.go:172] (0xc000658780) (3) Data frame sent\nI0205 21:58:46.426481 2282 log.go:172] (0xc0009d1ce0) Data frame received for 3\nI0205 21:58:46.426494 2282 log.go:172] (0xc000658780) (3) Data frame handling\nI0205 21:58:46.426502 2282 log.go:172] (0xc000658780) (3) Data frame sent\nI0205 21:58:46.540667 2282 log.go:172] (0xc0009d1ce0) Data frame received for 1\nI0205 21:58:46.540794 2282 log.go:172] (0xc000a38b40) (1) Data frame handling\nI0205 21:58:46.540825 2282 log.go:172] (0xc000a38b40) (1) Data frame sent\nI0205 21:58:46.540848 2282 log.go:172] (0xc0009d1ce0) (0xc000a38b40) Stream removed, broadcasting: 1\nI0205 21:58:46.541160 2282 log.go:172] (0xc0009d1ce0) (0xc000727540) Stream removed, broadcasting: 5\nI0205 21:58:46.541221 2282 log.go:172] (0xc0009d1ce0) (0xc000658780) Stream removed, broadcasting: 3\nI0205 21:58:46.541416 2282 log.go:172] (0xc0009d1ce0) Go away received\nI0205 21:58:46.541720 2282 log.go:172] (0xc0009d1ce0) (0xc000a38b40) Stream removed, broadcasting: 1\nI0205 21:58:46.541736 2282 log.go:172] (0xc0009d1ce0) (0xc000658780) Stream removed, broadcasting: 3\nI0205 21:58:46.541744 2282 log.go:172] (0xc0009d1ce0) (0xc000727540) Stream removed, broadcasting: 5\n" Feb 5 21:58:46.559: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1458.svc.cluster.local\tcanonical name = 
externalsvc.services-1458.svc.cluster.local.\nName:\texternalsvc.services-1458.svc.cluster.local\nAddress: 10.96.66.165\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1458, will wait for the garbage collector to delete the pods Feb 5 21:58:46.622: INFO: Deleting ReplicationController externalsvc took: 6.36799ms Feb 5 21:58:46.922: INFO: Terminating ReplicationController externalsvc pods took: 300.322107ms Feb 5 21:58:57.780: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:58:57.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1458" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:32.716 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":133,"skipped":1933,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:58:58.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: 
Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-f48a42bc-d534-4da4-b046-aa0ffe187363 STEP: Creating a pod to test consume configMaps Feb 5 21:58:58.128: INFO: Waiting up to 5m0s for pod "pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6" in namespace "configmap-119" to be "success or failure" Feb 5 21:58:58.177: INFO: Pod "pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 48.968908ms Feb 5 21:59:00.359: INFO: Pod "pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230839868s Feb 5 21:59:02.365: INFO: Pod "pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237005744s Feb 5 21:59:04.374: INFO: Pod "pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245860523s Feb 5 21:59:06.389: INFO: Pod "pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.260103078s Feb 5 21:59:08.397: INFO: Pod "pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.268115029s STEP: Saw pod success Feb 5 21:59:08.397: INFO: Pod "pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6" satisfied condition "success or failure" Feb 5 21:59:08.401: INFO: Trying to get logs from node jerma-node pod pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6 container configmap-volume-test: STEP: delete the pod Feb 5 21:59:08.455: INFO: Waiting for pod pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6 to disappear Feb 5 21:59:08.462: INFO: Pod pod-configmaps-08f4fcd8-8155-4b44-be25-47f59cf6b6e6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:59:08.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-119" for this suite. • [SLOW TEST:10.443 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":1948,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:59:08.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1672 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 5 21:59:08.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8104' Feb 5 21:59:08.893: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 5 21:59:08.894: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Feb 5 21:59:09.050: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Feb 5 21:59:09.058: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 5 21:59:09.120: INFO: scanned /root for discovery docs: Feb 5 21:59:09.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8104' Feb 5 21:59:30.442: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 5 21:59:30.443: INFO: stdout: "Created e2e-test-httpd-rc-8c9742aaa27697bd1493eba72bee16ad\nScaling up e2e-test-httpd-rc-8c9742aaa27697bd1493eba72bee16ad from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 
pods)\nScaling e2e-test-httpd-rc-8c9742aaa27697bd1493eba72bee16ad up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-8c9742aaa27697bd1493eba72bee16ad to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Feb 5 21:59:30.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-8104' Feb 5 21:59:30.651: INFO: stderr: "" Feb 5 21:59:30.651: INFO: stdout: "e2e-test-httpd-rc-8c9742aaa27697bd1493eba72bee16ad-9w75c " Feb 5 21:59:30.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-8c9742aaa27697bd1493eba72bee16ad-9w75c -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8104' Feb 5 21:59:30.761: INFO: stderr: "" Feb 5 21:59:30.761: INFO: stdout: "true" Feb 5 21:59:30.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-8c9742aaa27697bd1493eba72bee16ad-9w75c -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8104' Feb 5 21:59:30.841: INFO: stderr: "" Feb 5 21:59:30.841: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Feb 5 21:59:30.841: INFO: e2e-test-httpd-rc-8c9742aaa27697bd1493eba72bee16ad-9w75c is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1678 Feb 5 21:59:30.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8104' Feb 5 21:59:30.982: INFO: stderr: "" Feb 5 21:59:30.982: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 21:59:30.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8104" for this suite. 
• [SLOW TEST:22.566 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1667 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":135,"skipped":1955,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 21:59:31.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-s46q STEP: Creating a pod to test atomic-volume-subpath Feb 5 21:59:31.168: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-s46q" in namespace "subpath-3586" to be "success or failure" Feb 5 21:59:31.178: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.652213ms Feb 5 21:59:33.184: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015448058s Feb 5 21:59:35.190: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021577829s Feb 5 21:59:37.197: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028415032s Feb 5 21:59:39.203: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 8.034703851s Feb 5 21:59:41.212: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 10.042918576s Feb 5 21:59:43.218: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 12.049195824s Feb 5 21:59:45.225: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 14.056077085s Feb 5 21:59:47.232: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 16.063157795s Feb 5 21:59:49.238: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 18.06902799s Feb 5 21:59:51.248: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 20.078990538s Feb 5 21:59:53.257: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 22.088011015s Feb 5 21:59:55.264: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 24.095455793s Feb 5 21:59:57.273: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 26.104171813s Feb 5 21:59:59.287: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Running", Reason="", readiness=true. Elapsed: 28.118370401s Feb 5 22:00:01.294: INFO: Pod "pod-subpath-test-configmap-s46q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 30.125879228s STEP: Saw pod success Feb 5 22:00:01.295: INFO: Pod "pod-subpath-test-configmap-s46q" satisfied condition "success or failure" Feb 5 22:00:01.299: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-s46q container test-container-subpath-configmap-s46q: STEP: delete the pod Feb 5 22:00:01.404: INFO: Waiting for pod pod-subpath-test-configmap-s46q to disappear Feb 5 22:00:01.442: INFO: Pod pod-subpath-test-configmap-s46q no longer exists STEP: Deleting pod pod-subpath-test-configmap-s46q Feb 5 22:00:01.442: INFO: Deleting pod "pod-subpath-test-configmap-s46q" in namespace "subpath-3586" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:00:01.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3586" for this suite. • [SLOW TEST:30.417 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":136,"skipped":1976,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:00:01.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0205 22:00:43.146771 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 5 22:00:43.146: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:00:43.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "gc-7726" for this suite. • [SLOW TEST:41.699 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":137,"skipped":1981,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:00:43.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-760 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-760;check="$$(dig +tcp +noall +answer +search 
dns-test-service.dns-760 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-760;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-760.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-760.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-760.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-760.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-760.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-760.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-760.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-760.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-760.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-760.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-760.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-760.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 120.234.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.234.120_udp@PTR;check="$$(dig +tcp +noall +answer +search 120.234.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.234.120_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-760 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-760;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-760 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-760;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-760.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-760.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-760.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-760.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-760.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-760.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-760.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-760.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-760.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-760.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-760.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-760.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-760.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 120.234.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.234.120_udp@PTR;check="$$(dig +tcp +noall +answer +search 120.234.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.234.120_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 5 22:01:07.457: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.463: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.470: INFO: Unable to read wheezy_udp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.478: INFO: Unable to read wheezy_tcp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 
22:01:07.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.495: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.498: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.545: INFO: Unable to read jessie_udp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.550: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.553: INFO: Unable to read jessie_udp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.556: INFO: Unable to read jessie_tcp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.561: INFO: Unable to read jessie_udp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 
22:01:07.565: INFO: Unable to read jessie_tcp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.568: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.571: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:07.594: INFO: Lookups using dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-760 wheezy_tcp@dns-test-service.dns-760 wheezy_udp@dns-test-service.dns-760.svc wheezy_tcp@dns-test-service.dns-760.svc wheezy_udp@_http._tcp.dns-test-service.dns-760.svc wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-760 jessie_tcp@dns-test-service.dns-760 jessie_udp@dns-test-service.dns-760.svc jessie_tcp@dns-test-service.dns-760.svc jessie_udp@_http._tcp.dns-test-service.dns-760.svc jessie_tcp@_http._tcp.dns-test-service.dns-760.svc] Feb 5 22:01:12.619: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.628: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.636: 
INFO: Unable to read wheezy_udp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.642: INFO: Unable to read wheezy_tcp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.654: INFO: Unable to read wheezy_udp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.663: INFO: Unable to read wheezy_tcp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.668: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.672: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.705: INFO: Unable to read jessie_udp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.711: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.716: 
INFO: Unable to read jessie_udp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.721: INFO: Unable to read jessie_tcp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.726: INFO: Unable to read jessie_udp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.731: INFO: Unable to read jessie_tcp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.738: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.742: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:12.766: INFO: Lookups using dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-760 wheezy_tcp@dns-test-service.dns-760 wheezy_udp@dns-test-service.dns-760.svc wheezy_tcp@dns-test-service.dns-760.svc wheezy_udp@_http._tcp.dns-test-service.dns-760.svc wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.dns-760 jessie_tcp@dns-test-service.dns-760 jessie_udp@dns-test-service.dns-760.svc jessie_tcp@dns-test-service.dns-760.svc jessie_udp@_http._tcp.dns-test-service.dns-760.svc jessie_tcp@_http._tcp.dns-test-service.dns-760.svc] Feb 5 22:01:17.605: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.613: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.619: INFO: Unable to read wheezy_udp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.623: INFO: Unable to read wheezy_tcp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.629: INFO: Unable to read wheezy_udp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.635: INFO: Unable to read wheezy_tcp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.640: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 
22:01:17.645: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.682: INFO: Unable to read jessie_udp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.688: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.695: INFO: Unable to read jessie_udp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.701: INFO: Unable to read jessie_tcp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.707: INFO: Unable to read jessie_udp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.713: INFO: Unable to read jessie_tcp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.719: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 
22:01:17.724: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:17.761: INFO: Lookups using dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-760 wheezy_tcp@dns-test-service.dns-760 wheezy_udp@dns-test-service.dns-760.svc wheezy_tcp@dns-test-service.dns-760.svc wheezy_udp@_http._tcp.dns-test-service.dns-760.svc wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-760 jessie_tcp@dns-test-service.dns-760 jessie_udp@dns-test-service.dns-760.svc jessie_tcp@dns-test-service.dns-760.svc jessie_udp@_http._tcp.dns-test-service.dns-760.svc jessie_tcp@_http._tcp.dns-test-service.dns-760.svc] Feb 5 22:01:22.611: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.624: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.628: INFO: Unable to read wheezy_udp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.632: INFO: Unable to read wheezy_tcp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.636: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.639: INFO: Unable to read wheezy_tcp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.642: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.646: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.672: INFO: Unable to read jessie_udp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.676: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.680: INFO: Unable to read jessie_udp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.682: INFO: Unable to read jessie_tcp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.685: INFO: Unable to read 
jessie_udp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.688: INFO: Unable to read jessie_tcp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.692: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.696: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:22.716: INFO: Lookups using dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-760 wheezy_tcp@dns-test-service.dns-760 wheezy_udp@dns-test-service.dns-760.svc wheezy_tcp@dns-test-service.dns-760.svc wheezy_udp@_http._tcp.dns-test-service.dns-760.svc wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-760 jessie_tcp@dns-test-service.dns-760 jessie_udp@dns-test-service.dns-760.svc jessie_tcp@dns-test-service.dns-760.svc jessie_udp@_http._tcp.dns-test-service.dns-760.svc jessie_tcp@_http._tcp.dns-test-service.dns-760.svc] Feb 5 22:01:27.604: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.609: INFO: Unable to read 
wheezy_tcp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.613: INFO: Unable to read wheezy_udp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.617: INFO: Unable to read wheezy_tcp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.619: INFO: Unable to read wheezy_udp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.622: INFO: Unable to read wheezy_tcp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.624: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.627: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.646: INFO: Unable to read jessie_udp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.648: INFO: Unable to read 
jessie_tcp@dns-test-service from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.651: INFO: Unable to read jessie_udp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.654: INFO: Unable to read jessie_tcp@dns-test-service.dns-760 from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.657: INFO: Unable to read jessie_udp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.661: INFO: Unable to read jessie_tcp@dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.664: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.668: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:27.698: INFO: Lookups using dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-760 wheezy_tcp@dns-test-service.dns-760 wheezy_udp@dns-test-service.dns-760.svc 
wheezy_tcp@dns-test-service.dns-760.svc wheezy_udp@_http._tcp.dns-test-service.dns-760.svc wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-760 jessie_tcp@dns-test-service.dns-760 jessie_udp@dns-test-service.dns-760.svc jessie_tcp@dns-test-service.dns-760.svc jessie_udp@_http._tcp.dns-test-service.dns-760.svc jessie_tcp@_http._tcp.dns-test-service.dns-760.svc] Feb 5 22:01:32.800: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:32.948: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-760.svc from pod dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5: the server could not find the requested resource (get pods dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5) Feb 5 22:01:33.000: INFO: Lookups using dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5 failed for: [wheezy_tcp@_http._tcp.dns-test-service.dns-760.svc jessie_udp@_http._tcp.dns-test-service.dns-760.svc] Feb 5 22:01:37.797: INFO: DNS probes using dns-760/dns-test-215adeca-b7fd-4bc9-849b-70cdf75bf0e5 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:01:38.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-760" for this suite. 
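Editor's note: the wheezy/jessie probe loops above derive two lookup names from raw IPv4 addresses: the pod A record (octets joined with dashes under `<namespace>.pod.cluster.local`, via `hostname -i | awk -F.`) and the service PTR name (octets reversed under `in-addr.arpa.`). A minimal standalone sketch of those two derivations (helper names are illustrative, not from the test source):

```python
def pod_a_record(ip: str, namespace: str) -> str:
    # Mirrors the awk pipeline in the probe script:
    # 10.96.234.120 -> 10-96-234-120.<namespace>.pod.cluster.local
    return ip.replace(".", "-") + f".{namespace}.pod.cluster.local"


def ptr_name(ip: str) -> str:
    # Reverse the octets for the PTR query seen in the log:
    # 10.96.234.120 -> 120.234.96.10.in-addr.arpa.
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."
```

These match the names the log actually queries: `10-96-234-120`-style pod records in `dns-760.pod.cluster.local` and the PTR lookup for `120.234.96.10.in-addr.arpa.` against the service IP 10.96.234.120.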
• [SLOW TEST:55.183 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":138,"skipped":2022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:01:38.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-d5sd STEP: Creating a pod to test atomic-volume-subpath Feb 5 22:01:38.563: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-d5sd" in namespace "subpath-8335" to be "success or failure" Feb 5 22:01:38.573: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.075266ms Feb 5 22:01:40.586: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021712572s Feb 5 22:01:42.597: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033301575s Feb 5 22:01:44.618: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054187778s Feb 5 22:01:46.627: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062837079s Feb 5 22:01:48.633: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 10.068920004s Feb 5 22:01:50.638: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 12.07401s Feb 5 22:01:52.647: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 14.082746299s Feb 5 22:01:54.653: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 16.089491231s Feb 5 22:01:56.660: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 18.096609085s Feb 5 22:01:58.666: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 20.102453463s Feb 5 22:02:00.675: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 22.110925879s Feb 5 22:02:02.681: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 24.11766844s Feb 5 22:02:04.733: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 26.169270926s Feb 5 22:02:06.745: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 28.181539933s Feb 5 22:02:08.757: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Running", Reason="", readiness=true. Elapsed: 30.193354095s Feb 5 22:02:10.766: INFO: Pod "pod-subpath-test-projected-d5sd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 32.20257139s STEP: Saw pod success Feb 5 22:02:10.767: INFO: Pod "pod-subpath-test-projected-d5sd" satisfied condition "success or failure" Feb 5 22:02:10.772: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-d5sd container test-container-subpath-projected-d5sd: STEP: delete the pod Feb 5 22:02:10.920: INFO: Waiting for pod pod-subpath-test-projected-d5sd to disappear Feb 5 22:02:10.927: INFO: Pod pod-subpath-test-projected-d5sd no longer exists STEP: Deleting pod pod-subpath-test-projected-d5sd Feb 5 22:02:10.927: INFO: Deleting pod "pod-subpath-test-projected-d5sd" in namespace "subpath-8335" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:02:10.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8335" for this suite. • [SLOW TEST:32.591 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":139,"skipped":2062,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes 
client Feb 5 22:02:10.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-1aa5f665-39ab-4139-b240-0b5c5b95ae9b STEP: Creating a pod to test consume configMaps Feb 5 22:02:11.112: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8e934703-73a1-467a-9d76-c1c64ec09677" in namespace "projected-6535" to be "success or failure" Feb 5 22:02:11.133: INFO: Pod "pod-projected-configmaps-8e934703-73a1-467a-9d76-c1c64ec09677": Phase="Pending", Reason="", readiness=false. Elapsed: 20.744892ms Feb 5 22:02:13.140: INFO: Pod "pod-projected-configmaps-8e934703-73a1-467a-9d76-c1c64ec09677": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02760537s Feb 5 22:02:15.145: INFO: Pod "pod-projected-configmaps-8e934703-73a1-467a-9d76-c1c64ec09677": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032694872s Feb 5 22:02:17.150: INFO: Pod "pod-projected-configmaps-8e934703-73a1-467a-9d76-c1c64ec09677": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037223246s Feb 5 22:02:19.156: INFO: Pod "pod-projected-configmaps-8e934703-73a1-467a-9d76-c1c64ec09677": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.043667285s STEP: Saw pod success Feb 5 22:02:19.156: INFO: Pod "pod-projected-configmaps-8e934703-73a1-467a-9d76-c1c64ec09677" satisfied condition "success or failure" Feb 5 22:02:19.160: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-8e934703-73a1-467a-9d76-c1c64ec09677 container projected-configmap-volume-test: STEP: delete the pod Feb 5 22:02:19.387: INFO: Waiting for pod pod-projected-configmaps-8e934703-73a1-467a-9d76-c1c64ec09677 to disappear Feb 5 22:02:19.401: INFO: Pod pod-projected-configmaps-8e934703-73a1-467a-9d76-c1c64ec09677 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:02:19.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6535" for this suite. • [SLOW TEST:8.475 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:02:19.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Feb 5 22:02:19.568: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:02:31.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7782" for this suite. • [SLOW TEST:11.675 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":141,"skipped":2117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:02:31.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:02:31.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4146" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":142,"skipped":2151,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:02:31.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-c7b620d2-4901-4350-be72-502a73e10785 in namespace container-probe-7567 Feb 5 22:02:39.627: INFO: Started pod liveness-c7b620d2-4901-4350-be72-502a73e10785 in namespace container-probe-7567 STEP: checking the pod's current state and verifying that restartCount is present Feb 5 22:02:39.632: INFO: Initial restart count of pod liveness-c7b620d2-4901-4350-be72-502a73e10785 is 0 Feb 5 22:02:58.542: INFO: Restart count of pod container-probe-7567/liveness-c7b620d2-4901-4350-be72-502a73e10785 is now 1 (18.909191236s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:02:58.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7567" for this suite. 
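[Editor's note] The restart recorded above (restartCount going from 0 to 1 about 19 seconds after startup) is driven by an HTTP liveness probe against `/healthz`. A minimal sketch of the kind of pod spec this test exercises — the pod name, image, and probe timings here are illustrative assumptions, not values taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http            # hypothetical; the log's pod is liveness-c7b620d2-...
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # assumed image: a server that starts failing /healthz after a delay
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz           # the probe path named in the test title
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 3
```

Once `/healthz` returns a failure for `failureThreshold` consecutive probes, the kubelet kills and restarts the container, which is exactly the restart-count change the test asserts on.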
• [SLOW TEST:27.145 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2162,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:02:58.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:03:05.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2440" for this suite. 
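[Editor's note] The two ResourceQuota cases above (update/delete, and "status is promptly calculated") create a quota object and wait for the quota controller to populate `.status`. A hedged sketch of such an object — the name and hard limits are illustrative; the e2e framework generates its own names:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota               # hypothetical name
  namespace: resourcequota-2440  # namespace taken from the log
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 500Mi
```

The API server accepts the `spec`; the quota controller then copies `spec.hard` into `status.hard` and computes `status.used`, and the test passes once that status appears promptly.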
• [SLOW TEST:7.187 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":144,"skipped":2181,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:03:05.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 5 22:03:06.444: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 5 22:03:08.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 22:03:10.473: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 5 22:03:12.510: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716536986, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 5 22:03:15.589: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 22:03:15.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:03:16.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9790" for this suite. STEP: Destroying namespace "webhook-9790-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.311 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":145,"skipped":2201,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:03:17.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1358/configmap-test-c2fb46df-209b-4cf3-b120-a6cdc0a7eeda STEP: Creating a pod to test consume configMaps Feb 5 22:03:17.243: INFO: Waiting up to 5m0s for pod "pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e" in namespace "configmap-1358" to be "success or failure" Feb 5 22:03:17.262: INFO: Pod "pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e": Phase="Pending", Reason="", 
readiness=false. Elapsed: 18.283853ms Feb 5 22:03:19.282: INFO: Pod "pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038397289s Feb 5 22:03:21.290: INFO: Pod "pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046790235s Feb 5 22:03:23.301: INFO: Pod "pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057782073s Feb 5 22:03:25.311: INFO: Pod "pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067251989s Feb 5 22:03:27.511: INFO: Pod "pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.267498084s STEP: Saw pod success Feb 5 22:03:27.511: INFO: Pod "pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e" satisfied condition "success or failure" Feb 5 22:03:27.515: INFO: Trying to get logs from node jerma-node pod pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e container env-test: STEP: delete the pod Feb 5 22:03:27.963: INFO: Waiting for pod pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e to disappear Feb 5 22:03:28.000: INFO: Pod pod-configmaps-aff9811e-09da-49df-9fcf-a9cabdc7fd7e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:03:28.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1358" for this suite. 
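[Editor's note] The ConfigMap case above ("consumable via environment variable") mounts a ConfigMap key into a container's environment via `valueFrom.configMapKeyRef`. A sketch under assumed names and image (the log only reveals the container name `env-test`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test           # hypothetical; the log's name is configmap-test-c2fb46df-...
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps           # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: env-test               # matches the container name printed in the log
    image: busybox               # assumed image
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1        # assumed variable name
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The test then reads the pod's logs (the `env` output) and checks the variable carries the ConfigMap value, which is why it waits for the "success or failure" phase seen above.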
• [SLOW TEST:10.927 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:03:28.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1362 STEP: creating the pod Feb 5 22:03:28.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-211' Feb 5 22:03:30.835: INFO: stderr: "" Feb 5 22:03:30.835: INFO: stdout: "pod/pause created\n" Feb 5 22:03:30.835: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 5 22:03:30.836: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-211" to be "running and ready" Feb 5 22:03:30.873: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 37.151244ms Feb 5 22:03:32.881: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045271708s Feb 5 22:03:34.891: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054896478s Feb 5 22:03:36.895: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.05936397s Feb 5 22:03:36.895: INFO: Pod "pause" satisfied condition "running and ready" Feb 5 22:03:36.895: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Feb 5 22:03:36.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-211' Feb 5 22:03:36.984: INFO: stderr: "" Feb 5 22:03:36.984: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 5 22:03:36.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-211' Feb 5 22:03:37.105: INFO: stderr: "" Feb 5 22:03:37.106: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 5 22:03:37.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-211' Feb 5 22:03:37.229: INFO: stderr: "" Feb 5 22:03:37.229: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 5 22:03:37.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-211' Feb 5 22:03:37.330: INFO: stderr: "" Feb 5 22:03:37.330: INFO: stdout: "NAME READY STATUS RESTARTS AGE 
TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1369 STEP: using delete to clean up resources Feb 5 22:03:37.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-211' Feb 5 22:03:37.540: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 5 22:03:37.540: INFO: stdout: "pod \"pause\" force deleted\n" Feb 5 22:03:37.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-211' Feb 5 22:03:37.659: INFO: stderr: "No resources found in kubectl-211 namespace.\n" Feb 5 22:03:37.660: INFO: stdout: "" Feb 5 22:03:37.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-211 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 5 22:03:37.749: INFO: stderr: "" Feb 5 22:03:37.749: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:03:37.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-211" for this suite. 
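[Editor's note] The `kubectl label` commands in the transcript mutate the pod's `metadata.labels` map in place; the trailing `-` form (`testing-label-`) deletes the key. The equivalent declarative state the test toggles, with an assumed image tag:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    testing-label: testing-label-value   # added by `kubectl label`, removed by `kubectl label ... testing-label-`
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1          # assumed image
```

The `-L testing-label` flag on `kubectl get` surfaces that label as an extra output column, which is how the test verifies both the add and the removal above.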
• [SLOW TEST:9.732 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1359 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":147,"skipped":2240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:03:37.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-2564 STEP: creating replication controller nodeport-test in namespace services-2564 I0205 22:03:41.247371 9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-2564, replica count: 2 I0205 22:03:44.298410 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 
22:03:47.299048 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:03:50.299703 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:03:53.300190 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 5 22:03:53.300: INFO: Creating new exec pod Feb 5 22:04:02.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2564 execpodk2wzs -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Feb 5 22:04:02.687: INFO: stderr: "I0205 22:04:02.473333 2573 log.go:172] (0xc000a689a0) (0xc000a60000) Create stream\nI0205 22:04:02.473560 2573 log.go:172] (0xc000a689a0) (0xc000a60000) Stream added, broadcasting: 1\nI0205 22:04:02.476061 2573 log.go:172] (0xc000a689a0) Reply frame received for 1\nI0205 22:04:02.476104 2573 log.go:172] (0xc000a689a0) (0xc000a600a0) Create stream\nI0205 22:04:02.476115 2573 log.go:172] (0xc000a689a0) (0xc000a600a0) Stream added, broadcasting: 3\nI0205 22:04:02.476973 2573 log.go:172] (0xc000a689a0) Reply frame received for 3\nI0205 22:04:02.476990 2573 log.go:172] (0xc000a689a0) (0xc0001a5a40) Create stream\nI0205 22:04:02.476997 2573 log.go:172] (0xc000a689a0) (0xc0001a5a40) Stream added, broadcasting: 5\nI0205 22:04:02.477787 2573 log.go:172] (0xc000a689a0) Reply frame received for 5\nI0205 22:04:02.567001 2573 log.go:172] (0xc000a689a0) Data frame received for 5\nI0205 22:04:02.567402 2573 log.go:172] (0xc0001a5a40) (5) Data frame handling\nI0205 22:04:02.567431 2573 log.go:172] (0xc0001a5a40) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0205 22:04:02.572641 2573 log.go:172] (0xc000a689a0) Data frame received for 5\nI0205 22:04:02.572682 2573 log.go:172] (0xc0001a5a40) (5) Data frame 
handling\nI0205 22:04:02.572712 2573 log.go:172] (0xc0001a5a40) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0205 22:04:02.675517 2573 log.go:172] (0xc000a689a0) Data frame received for 1\nI0205 22:04:02.675683 2573 log.go:172] (0xc000a689a0) (0xc0001a5a40) Stream removed, broadcasting: 5\nI0205 22:04:02.675734 2573 log.go:172] (0xc000a60000) (1) Data frame handling\nI0205 22:04:02.675765 2573 log.go:172] (0xc000a60000) (1) Data frame sent\nI0205 22:04:02.675795 2573 log.go:172] (0xc000a689a0) (0xc000a600a0) Stream removed, broadcasting: 3\nI0205 22:04:02.675827 2573 log.go:172] (0xc000a689a0) (0xc000a60000) Stream removed, broadcasting: 1\nI0205 22:04:02.675847 2573 log.go:172] (0xc000a689a0) Go away received\nI0205 22:04:02.676791 2573 log.go:172] (0xc000a689a0) (0xc000a60000) Stream removed, broadcasting: 1\nI0205 22:04:02.676847 2573 log.go:172] (0xc000a689a0) (0xc000a600a0) Stream removed, broadcasting: 3\nI0205 22:04:02.676858 2573 log.go:172] (0xc000a689a0) (0xc0001a5a40) Stream removed, broadcasting: 5\n" Feb 5 22:04:02.687: INFO: stdout: "" Feb 5 22:04:02.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2564 execpodk2wzs -- /bin/sh -x -c nc -zv -t -w 2 10.96.111.119 80' Feb 5 22:04:02.985: INFO: stderr: "I0205 22:04:02.828947 2586 log.go:172] (0xc00068a000) (0xc00089a000) Create stream\nI0205 22:04:02.829033 2586 log.go:172] (0xc00068a000) (0xc00089a000) Stream added, broadcasting: 1\nI0205 22:04:02.831560 2586 log.go:172] (0xc00068a000) Reply frame received for 1\nI0205 22:04:02.831593 2586 log.go:172] (0xc00068a000) (0xc000974000) Create stream\nI0205 22:04:02.831601 2586 log.go:172] (0xc00068a000) (0xc000974000) Stream added, broadcasting: 3\nI0205 22:04:02.832587 2586 log.go:172] (0xc00068a000) Reply frame received for 3\nI0205 22:04:02.832599 2586 log.go:172] (0xc00068a000) (0xc00089a0a0) Create stream\nI0205 22:04:02.832604 2586 log.go:172] (0xc00068a000) 
(0xc00089a0a0) Stream added, broadcasting: 5\nI0205 22:04:02.833583 2586 log.go:172] (0xc00068a000) Reply frame received for 5\nI0205 22:04:02.898439 2586 log.go:172] (0xc00068a000) Data frame received for 5\nI0205 22:04:02.898497 2586 log.go:172] (0xc00089a0a0) (5) Data frame handling\nI0205 22:04:02.898528 2586 log.go:172] (0xc00089a0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.111.119 80\nI0205 22:04:02.901444 2586 log.go:172] (0xc00068a000) Data frame received for 5\nI0205 22:04:02.901492 2586 log.go:172] (0xc00089a0a0) (5) Data frame handling\nI0205 22:04:02.901510 2586 log.go:172] (0xc00089a0a0) (5) Data frame sent\nConnection to 10.96.111.119 80 port [tcp/http] succeeded!\nI0205 22:04:02.977837 2586 log.go:172] (0xc00068a000) (0xc000974000) Stream removed, broadcasting: 3\nI0205 22:04:02.978014 2586 log.go:172] (0xc00068a000) Data frame received for 1\nI0205 22:04:02.978039 2586 log.go:172] (0xc00089a000) (1) Data frame handling\nI0205 22:04:02.978091 2586 log.go:172] (0xc00089a000) (1) Data frame sent\nI0205 22:04:02.978183 2586 log.go:172] (0xc00068a000) (0xc00089a000) Stream removed, broadcasting: 1\nI0205 22:04:02.978752 2586 log.go:172] (0xc00068a000) (0xc00089a0a0) Stream removed, broadcasting: 5\nI0205 22:04:02.978940 2586 log.go:172] (0xc00068a000) Go away received\nI0205 22:04:02.979194 2586 log.go:172] (0xc00068a000) (0xc00089a000) Stream removed, broadcasting: 1\nI0205 22:04:02.979208 2586 log.go:172] (0xc00068a000) (0xc000974000) Stream removed, broadcasting: 3\nI0205 22:04:02.979212 2586 log.go:172] (0xc00068a000) (0xc00089a0a0) Stream removed, broadcasting: 5\n" Feb 5 22:04:02.985: INFO: stdout: "" Feb 5 22:04:02.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2564 execpodk2wzs -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 31724' Feb 5 22:04:03.262: INFO: stderr: "I0205 22:04:03.113881 2605 log.go:172] (0xc000b6efd0) (0xc00097a6e0) Create stream\nI0205 22:04:03.114011 2605 log.go:172] 
(0xc000b6efd0) (0xc00097a6e0) Stream added, broadcasting: 1\nI0205 22:04:03.118138 2605 log.go:172] (0xc000b6efd0) Reply frame received for 1\nI0205 22:04:03.118176 2605 log.go:172] (0xc000b6efd0) (0xc00055c640) Create stream\nI0205 22:04:03.118183 2605 log.go:172] (0xc000b6efd0) (0xc00055c640) Stream added, broadcasting: 3\nI0205 22:04:03.119177 2605 log.go:172] (0xc000b6efd0) Reply frame received for 3\nI0205 22:04:03.119197 2605 log.go:172] (0xc000b6efd0) (0xc00076f360) Create stream\nI0205 22:04:03.119206 2605 log.go:172] (0xc000b6efd0) (0xc00076f360) Stream added, broadcasting: 5\nI0205 22:04:03.120242 2605 log.go:172] (0xc000b6efd0) Reply frame received for 5\nI0205 22:04:03.182941 2605 log.go:172] (0xc000b6efd0) Data frame received for 5\nI0205 22:04:03.183048 2605 log.go:172] (0xc00076f360) (5) Data frame handling\nI0205 22:04:03.183079 2605 log.go:172] (0xc00076f360) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 31724\nI0205 22:04:03.185039 2605 log.go:172] (0xc000b6efd0) Data frame received for 5\nI0205 22:04:03.185089 2605 log.go:172] (0xc00076f360) (5) Data frame handling\nI0205 22:04:03.185111 2605 log.go:172] (0xc00076f360) (5) Data frame sent\nConnection to 10.96.2.250 31724 port [tcp/31724] succeeded!\nI0205 22:04:03.251700 2605 log.go:172] (0xc000b6efd0) (0xc00055c640) Stream removed, broadcasting: 3\nI0205 22:04:03.252191 2605 log.go:172] (0xc000b6efd0) Data frame received for 1\nI0205 22:04:03.252248 2605 log.go:172] (0xc00097a6e0) (1) Data frame handling\nI0205 22:04:03.252327 2605 log.go:172] (0xc00097a6e0) (1) Data frame sent\nI0205 22:04:03.252472 2605 log.go:172] (0xc000b6efd0) (0xc00097a6e0) Stream removed, broadcasting: 1\nI0205 22:04:03.253667 2605 log.go:172] (0xc000b6efd0) (0xc00076f360) Stream removed, broadcasting: 5\nI0205 22:04:03.253713 2605 log.go:172] (0xc000b6efd0) Go away received\nI0205 22:04:03.254266 2605 log.go:172] (0xc000b6efd0) (0xc00097a6e0) Stream removed, broadcasting: 1\nI0205 22:04:03.254397 2605 log.go:172] 
(0xc000b6efd0) (0xc00055c640) Stream removed, broadcasting: 3\nI0205 22:04:03.254418 2605 log.go:172] (0xc000b6efd0) (0xc00076f360) Stream removed, broadcasting: 5\n" Feb 5 22:04:03.262: INFO: stdout: "" Feb 5 22:04:03.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2564 execpodk2wzs -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 31724' Feb 5 22:04:03.573: INFO: stderr: "I0205 22:04:03.383171 2625 log.go:172] (0xc0008d6bb0) (0xc0008e63c0) Create stream\nI0205 22:04:03.383259 2625 log.go:172] (0xc0008d6bb0) (0xc0008e63c0) Stream added, broadcasting: 1\nI0205 22:04:03.387547 2625 log.go:172] (0xc0008d6bb0) Reply frame received for 1\nI0205 22:04:03.387603 2625 log.go:172] (0xc0008d6bb0) (0xc00057c640) Create stream\nI0205 22:04:03.387675 2625 log.go:172] (0xc0008d6bb0) (0xc00057c640) Stream added, broadcasting: 3\nI0205 22:04:03.389201 2625 log.go:172] (0xc0008d6bb0) Reply frame received for 3\nI0205 22:04:03.389301 2625 log.go:172] (0xc0008d6bb0) (0xc0008e6000) Create stream\nI0205 22:04:03.389316 2625 log.go:172] (0xc0008d6bb0) (0xc0008e6000) Stream added, broadcasting: 5\nI0205 22:04:03.390340 2625 log.go:172] (0xc0008d6bb0) Reply frame received for 5\nI0205 22:04:03.449140 2625 log.go:172] (0xc0008d6bb0) Data frame received for 5\nI0205 22:04:03.449223 2625 log.go:172] (0xc0008e6000) (5) Data frame handling\nI0205 22:04:03.449248 2625 log.go:172] (0xc0008e6000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 31724\nI0205 22:04:03.453997 2625 log.go:172] (0xc0008d6bb0) Data frame received for 5\nI0205 22:04:03.454036 2625 log.go:172] (0xc0008e6000) (5) Data frame handling\nI0205 22:04:03.454062 2625 log.go:172] (0xc0008e6000) (5) Data frame sent\nConnection to 10.96.1.234 31724 port [tcp/31724] succeeded!\nI0205 22:04:03.562449 2625 log.go:172] (0xc0008d6bb0) Data frame received for 1\nI0205 22:04:03.562796 2625 log.go:172] (0xc0008d6bb0) (0xc0008e6000) Stream removed, broadcasting: 5\nI0205 22:04:03.562869 2625 
log.go:172] (0xc0008e63c0) (1) Data frame handling\nI0205 22:04:03.562908 2625 log.go:172] (0xc0008e63c0) (1) Data frame sent\nI0205 22:04:03.562932 2625 log.go:172] (0xc0008d6bb0) (0xc00057c640) Stream removed, broadcasting: 3\nI0205 22:04:03.562960 2625 log.go:172] (0xc0008d6bb0) (0xc0008e63c0) Stream removed, broadcasting: 1\nI0205 22:04:03.563004 2625 log.go:172] (0xc0008d6bb0) Go away received\nI0205 22:04:03.564045 2625 log.go:172] (0xc0008d6bb0) (0xc0008e63c0) Stream removed, broadcasting: 1\nI0205 22:04:03.564084 2625 log.go:172] (0xc0008d6bb0) (0xc00057c640) Stream removed, broadcasting: 3\nI0205 22:04:03.564089 2625 log.go:172] (0xc0008d6bb0) (0xc0008e6000) Stream removed, broadcasting: 5\n" Feb 5 22:04:03.574: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:04:03.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2564" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.829 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":148,"skipped":2268,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:04:03.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Feb 5 22:04:03.706: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6f6bd845-f835-4405-bbf7-1843320963b1" in namespace "downward-api-2945" to be "success or failure" Feb 5 22:04:03.712: INFO: Pod "downwardapi-volume-6f6bd845-f835-4405-bbf7-1843320963b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.435387ms Feb 5 22:04:05.718: INFO: Pod "downwardapi-volume-6f6bd845-f835-4405-bbf7-1843320963b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012520904s Feb 5 22:04:07.726: INFO: Pod "downwardapi-volume-6f6bd845-f835-4405-bbf7-1843320963b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020458453s Feb 5 22:04:09.768: INFO: Pod "downwardapi-volume-6f6bd845-f835-4405-bbf7-1843320963b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062464103s Feb 5 22:04:12.769: INFO: Pod "downwardapi-volume-6f6bd845-f835-4405-bbf7-1843320963b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.063471952s STEP: Saw pod success Feb 5 22:04:12.769: INFO: Pod "downwardapi-volume-6f6bd845-f835-4405-bbf7-1843320963b1" satisfied condition "success or failure" Feb 5 22:04:12.802: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6f6bd845-f835-4405-bbf7-1843320963b1 container client-container: STEP: delete the pod Feb 5 22:04:13.285: INFO: Waiting for pod downwardapi-volume-6f6bd845-f835-4405-bbf7-1843320963b1 to disappear Feb 5 22:04:13.324: INFO: Pod downwardapi-volume-6f6bd845-f835-4405-bbf7-1843320963b1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:04:13.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2945" for this suite. 
• [SLOW TEST:9.757 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2282,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:04:13.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f68e58db-6633-4df9-bc4f-dbd6d0ecbe93 STEP: Creating a pod to test consume configMaps Feb 5 22:04:13.764: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564" in namespace "projected-6151" to be "success or failure" Feb 5 22:04:14.006: INFO: Pod "pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564": Phase="Pending", Reason="", readiness=false. Elapsed: 242.18144ms Feb 5 22:04:16.017: INFO: Pod "pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.25248883s Feb 5 22:04:18.027: INFO: Pod "pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564": Phase="Pending", Reason="", readiness=false. Elapsed: 4.263149974s Feb 5 22:04:20.032: INFO: Pod "pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564": Phase="Pending", Reason="", readiness=false. Elapsed: 6.267543591s Feb 5 22:04:22.085: INFO: Pod "pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564": Phase="Pending", Reason="", readiness=false. Elapsed: 8.320835386s Feb 5 22:04:24.092: INFO: Pod "pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564": Phase="Pending", Reason="", readiness=false. Elapsed: 10.32751582s Feb 5 22:04:26.246: INFO: Pod "pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.481539714s STEP: Saw pod success Feb 5 22:04:26.246: INFO: Pod "pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564" satisfied condition "success or failure" Feb 5 22:04:26.254: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564 container projected-configmap-volume-test: STEP: delete the pod Feb 5 22:04:26.360: INFO: Waiting for pod pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564 to disappear Feb 5 22:04:26.365: INFO: Pod pod-projected-configmaps-909f848a-ede9-425f-807b-290477795564 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:04:26.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6151" for this suite. 
• [SLOW TEST:13.031 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2292,"failed":0} S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:04:26.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Feb 5 22:04:34.620: INFO: Waiting up to 5m0s for pod "client-envvars-f7b4311f-a549-4f92-976a-1688d3afd3c7" in namespace "pods-6031" to be "success or failure" Feb 5 22:04:34.633: INFO: Pod "client-envvars-f7b4311f-a549-4f92-976a-1688d3afd3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.893606ms Feb 5 22:04:36.639: INFO: Pod "client-envvars-f7b4311f-a549-4f92-976a-1688d3afd3c7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018217185s Feb 5 22:04:38.646: INFO: Pod "client-envvars-f7b4311f-a549-4f92-976a-1688d3afd3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025089024s Feb 5 22:04:40.660: INFO: Pod "client-envvars-f7b4311f-a549-4f92-976a-1688d3afd3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039283497s Feb 5 22:04:42.671: INFO: Pod "client-envvars-f7b4311f-a549-4f92-976a-1688d3afd3c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050511331s STEP: Saw pod success Feb 5 22:04:42.672: INFO: Pod "client-envvars-f7b4311f-a549-4f92-976a-1688d3afd3c7" satisfied condition "success or failure" Feb 5 22:04:42.676: INFO: Trying to get logs from node jerma-node pod client-envvars-f7b4311f-a549-4f92-976a-1688d3afd3c7 container env3cont: STEP: delete the pod Feb 5 22:04:42.740: INFO: Waiting for pod client-envvars-f7b4311f-a549-4f92-976a-1688d3afd3c7 to disappear Feb 5 22:04:42.746: INFO: Pod client-envvars-f7b4311f-a549-4f92-976a-1688d3afd3c7 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:04:42.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6031" for this suite. 
• [SLOW TEST:16.385 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2293,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:04:42.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-2228673a-ec93-45f5-9ad4-d8adbfdc2898 STEP: Creating secret with name s-test-opt-upd-ced42367-b152-4e41-bf27-e516e731ac38 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2228673a-ec93-45f5-9ad4-d8adbfdc2898 STEP: Updating secret s-test-opt-upd-ced42367-b152-4e41-bf27-e516e731ac38 STEP: Creating secret with name s-test-opt-create-fe61fe55-4518-4584-a537-78cf7b0a1894 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:04:59.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "secrets-9705" for this suite. • [SLOW TEST:16.408 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2301,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:04:59.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-bd909616-e760-4f1e-8850-0fad9cd6f838 STEP: Creating a pod to test consume configMaps Feb 5 22:04:59.313: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0c2ff4b4-c49f-4dd2-8a1d-b8abb88713a3" in namespace "projected-5493" to be "success or failure" Feb 5 22:04:59.325: INFO: Pod "pod-projected-configmaps-0c2ff4b4-c49f-4dd2-8a1d-b8abb88713a3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.325905ms Feb 5 22:05:01.382: INFO: Pod "pod-projected-configmaps-0c2ff4b4-c49f-4dd2-8a1d-b8abb88713a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068765697s Feb 5 22:05:03.392: INFO: Pod "pod-projected-configmaps-0c2ff4b4-c49f-4dd2-8a1d-b8abb88713a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078945963s Feb 5 22:05:05.758: INFO: Pod "pod-projected-configmaps-0c2ff4b4-c49f-4dd2-8a1d-b8abb88713a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444761915s Feb 5 22:05:07.773: INFO: Pod "pod-projected-configmaps-0c2ff4b4-c49f-4dd2-8a1d-b8abb88713a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.460055159s STEP: Saw pod success Feb 5 22:05:07.774: INFO: Pod "pod-projected-configmaps-0c2ff4b4-c49f-4dd2-8a1d-b8abb88713a3" satisfied condition "success or failure" Feb 5 22:05:07.778: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0c2ff4b4-c49f-4dd2-8a1d-b8abb88713a3 container projected-configmap-volume-test: STEP: delete the pod Feb 5 22:05:07.966: INFO: Waiting for pod pod-projected-configmaps-0c2ff4b4-c49f-4dd2-8a1d-b8abb88713a3 to disappear Feb 5 22:05:07.989: INFO: Pod pod-projected-configmaps-0c2ff4b4-c49f-4dd2-8a1d-b8abb88713a3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:05:07.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5493" for this suite. 
• [SLOW TEST:8.829 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2310,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:05:07.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 5 22:05:08.230: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2257 /api/v1/namespaces/watch-2257/configmaps/e2e-watch-test-watch-closed 4825cd3f-09a5-454d-991d-b4448c62c3f7 6616025 0 2020-02-05 22:05:08 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 5 
22:05:08.231: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2257 /api/v1/namespaces/watch-2257/configmaps/e2e-watch-test-watch-closed 4825cd3f-09a5-454d-991d-b4448c62c3f7 6616026 0 2020-02-05 22:05:08 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 5 22:05:08.311: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2257 /api/v1/namespaces/watch-2257/configmaps/e2e-watch-test-watch-closed 4825cd3f-09a5-454d-991d-b4448c62c3f7 6616027 0 2020-02-05 22:05:08 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 5 22:05:08.311: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2257 /api/v1/namespaces/watch-2257/configmaps/e2e-watch-test-watch-closed 4825cd3f-09a5-454d-991d-b4448c62c3f7 6616028 0 2020-02-05 22:05:08 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:05:08.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2257" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":154,"skipped":2311,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:05:08.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-6shns in namespace proxy-1290 I0205 22:05:08.681043 9 runners.go:189] Created replication controller with name: proxy-service-6shns, namespace: proxy-1290, replica count: 1 I0205 22:05:09.732006 9 runners.go:189] proxy-service-6shns Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:05:10.732788 9 runners.go:189] proxy-service-6shns Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:05:11.733327 9 runners.go:189] proxy-service-6shns Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:05:12.733713 9 runners.go:189] proxy-service-6shns Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:05:13.734185 9 runners.go:189] proxy-service-6shns 
Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:05:14.734640 9 runners.go:189] proxy-service-6shns Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:05:15.735014 9 runners.go:189] proxy-service-6shns Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:05:16.735385 9 runners.go:189] proxy-service-6shns Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0205 22:05:17.735806 9 runners.go:189] proxy-service-6shns Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0205 22:05:18.736128 9 runners.go:189] proxy-service-6shns Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0205 22:05:19.736790 9 runners.go:189] proxy-service-6shns Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 5 22:05:19.746: INFO: setup took 11.259712656s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 5 22:05:19.786: INFO: (0) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 39.754876ms) Feb 5 22:05:19.786: INFO: (0) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 40.313936ms) Feb 5 22:05:19.787: INFO: (0) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... 
(200; 40.32192ms) Feb 5 22:05:19.787: INFO: (0) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 41.248152ms) Feb 5 22:05:19.788: INFO: (0) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 41.449166ms) Feb 5 22:05:19.788: INFO: (0) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 41.810785ms) Feb 5 22:05:19.789: INFO: (0) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... (200; 42.904434ms) Feb 5 22:05:19.789: INFO: (0) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 43.156427ms) Feb 5 22:05:19.790: INFO: (0) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 43.456313ms) Feb 5 22:05:19.790: INFO: (0) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 43.642705ms) Feb 5 22:05:19.796: INFO: (0) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 49.586246ms) Feb 5 22:05:19.796: INFO: (0) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 49.94773ms) Feb 5 22:05:19.798: INFO: (0) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 51.367434ms) Feb 5 22:05:19.805: INFO: (0) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 58.460866ms) Feb 5 22:05:19.805: INFO: (0) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test (200; 21.562825ms) Feb 5 22:05:19.827: INFO: (1) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 22.343587ms) Feb 5 22:05:19.827: INFO: (1) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 22.230547ms) Feb 5 22:05:19.828: INFO: (1) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 23.003589ms) 
Feb 5 22:05:19.828: INFO: (1) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... (200; 23.495768ms) Feb 5 22:05:19.833: INFO: (1) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... (200; 27.70953ms) Feb 5 22:05:19.833: INFO: (1) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 27.949436ms) Feb 5 22:05:19.834: INFO: (1) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 28.74887ms) Feb 5 22:05:19.834: INFO: (1) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 29.287897ms) Feb 5 22:05:19.834: INFO: (1) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 28.869935ms) Feb 5 22:05:19.835: INFO: (1) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test<... (200; 9.045176ms) Feb 5 22:05:19.846: INFO: (2) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 10.628504ms) Feb 5 22:05:19.848: INFO: (2) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... 
(200; 12.789033ms) Feb 5 22:05:19.849: INFO: (2) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 13.115338ms) Feb 5 22:05:19.849: INFO: (2) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 13.412529ms) Feb 5 22:05:19.849: INFO: (2) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 13.39898ms) Feb 5 22:05:19.850: INFO: (2) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 14.778624ms) Feb 5 22:05:19.851: INFO: (2) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 14.984219ms) Feb 5 22:05:19.851: INFO: (2) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 15.269268ms) Feb 5 22:05:19.853: INFO: (2) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 17.486631ms) Feb 5 22:05:19.853: INFO: (2) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 17.692651ms) Feb 5 22:05:19.853: INFO: (2) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 17.513479ms) Feb 5 22:05:19.853: INFO: (2) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 17.472787ms) Feb 5 22:05:19.853: INFO: (2) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 17.459534ms) Feb 5 22:05:19.865: INFO: (3) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 11.910205ms) Feb 5 22:05:19.865: INFO: (3) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 11.934367ms) Feb 5 22:05:19.873: INFO: (3) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 19.665118ms) Feb 5 22:05:19.874: INFO: (3) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 
19.904223ms) Feb 5 22:05:19.874: INFO: (3) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... (200; 20.200408ms) Feb 5 22:05:19.874: INFO: (3) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 20.417827ms) Feb 5 22:05:19.874: INFO: (3) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: ... (200; 30.046068ms) Feb 5 22:05:19.884: INFO: (3) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 30.536454ms) Feb 5 22:05:19.884: INFO: (3) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 30.60312ms) Feb 5 22:05:19.885: INFO: (3) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 30.976422ms) Feb 5 22:05:19.891: INFO: (3) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 37.806429ms) Feb 5 22:05:19.891: INFO: (3) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 37.50765ms) Feb 5 22:05:19.891: INFO: (3) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 37.888603ms) Feb 5 22:05:19.891: INFO: (3) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 37.506178ms) Feb 5 22:05:19.891: INFO: (3) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 37.570954ms) Feb 5 22:05:19.904: INFO: (4) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 12.236715ms) Feb 5 22:05:19.904: INFO: (4) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test<... 
(200; 13.040183ms) Feb 5 22:05:19.906: INFO: (4) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 14.059658ms) Feb 5 22:05:19.906: INFO: (4) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 13.58908ms) Feb 5 22:05:19.906: INFO: (4) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 13.69974ms) Feb 5 22:05:19.906: INFO: (4) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 13.694323ms) Feb 5 22:05:19.907: INFO: (4) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 14.432664ms) Feb 5 22:05:19.907: INFO: (4) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 14.529329ms) Feb 5 22:05:19.907: INFO: (4) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 14.507565ms) Feb 5 22:05:19.907: INFO: (4) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 15.393889ms) Feb 5 22:05:19.909: INFO: (4) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 17.355919ms) Feb 5 22:05:19.910: INFO: (4) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 17.804118ms) Feb 5 22:05:19.910: INFO: (4) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... (200; 17.862073ms) Feb 5 22:05:19.911: INFO: (4) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 18.352867ms) Feb 5 22:05:19.914: INFO: (5) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... 
(200; 3.481881ms) Feb 5 22:05:19.923: INFO: (5) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 11.818817ms) Feb 5 22:05:19.923: INFO: (5) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 11.949068ms) Feb 5 22:05:19.923: INFO: (5) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 11.913392ms) Feb 5 22:05:19.923: INFO: (5) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 12.423106ms) Feb 5 22:05:19.923: INFO: (5) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 12.184034ms) Feb 5 22:05:19.925: INFO: (5) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 14.163819ms) Feb 5 22:05:19.925: INFO: (5) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 14.238912ms) Feb 5 22:05:19.925: INFO: (5) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 14.471393ms) Feb 5 22:05:19.925: INFO: (5) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... (200; 14.571455ms) Feb 5 22:05:19.925: INFO: (5) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 14.501841ms) Feb 5 22:05:19.925: INFO: (5) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 14.58298ms) Feb 5 22:05:19.925: INFO: (5) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 14.470214ms) Feb 5 22:05:19.925: INFO: (5) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 14.654961ms) Feb 5 22:05:19.926: INFO: (5) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 14.54237ms) Feb 5 22:05:19.926: INFO: (5) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: ... 
(200; 6.240155ms) Feb 5 22:05:19.932: INFO: (6) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... (200; 6.415223ms) Feb 5 22:05:19.934: INFO: (6) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 7.830403ms) Feb 5 22:05:19.935: INFO: (6) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 8.501874ms) Feb 5 22:05:19.935: INFO: (6) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 8.823294ms) Feb 5 22:05:19.935: INFO: (6) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test<... (200; 7.725656ms) Feb 5 22:05:19.950: INFO: (7) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: ... (200; 7.717441ms) Feb 5 22:05:19.950: INFO: (7) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 7.816268ms) Feb 5 22:05:19.950: INFO: (7) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 7.854343ms) Feb 5 22:05:19.950: INFO: (7) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 7.782526ms) Feb 5 22:05:19.950: INFO: (7) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 8.249931ms) Feb 5 22:05:19.950: INFO: (7) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 8.237845ms) Feb 5 22:05:19.951: INFO: (7) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 9.430108ms) Feb 5 22:05:19.952: INFO: (7) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 9.47868ms) Feb 5 22:05:19.952: INFO: (7) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 10.302259ms) Feb 5 22:05:19.953: INFO: (7) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 10.974465ms) Feb 5 22:05:19.954: INFO: (7) 
/api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 12.302164ms) Feb 5 22:05:19.954: INFO: (7) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 12.141797ms) Feb 5 22:05:19.955: INFO: (7) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 13.155298ms) Feb 5 22:05:19.955: INFO: (7) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 13.261996ms) Feb 5 22:05:19.962: INFO: (8) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... (200; 6.102587ms) Feb 5 22:05:19.964: INFO: (8) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 7.433222ms) Feb 5 22:05:19.965: INFO: (8) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test<... (200; 8.798961ms) Feb 5 22:05:19.965: INFO: (8) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 8.590853ms) Feb 5 22:05:19.965: INFO: (8) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 8.676842ms) Feb 5 22:05:19.965: INFO: (8) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 8.752073ms) Feb 5 22:05:19.965: INFO: (8) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 9.044802ms) Feb 5 22:05:19.968: INFO: (8) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 11.59885ms) Feb 5 22:05:19.970: INFO: (8) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 13.691749ms) Feb 5 22:05:19.970: INFO: (8) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 14.043906ms) Feb 5 22:05:19.970: INFO: (8) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 13.935226ms) Feb 5 22:05:19.970: 
INFO: (8) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 14.061712ms) Feb 5 22:05:19.970: INFO: (8) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 14.099443ms) Feb 5 22:05:19.971: INFO: (8) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 14.696838ms) Feb 5 22:05:19.979: INFO: (9) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test<... (200; 7.595005ms) Feb 5 22:05:19.979: INFO: (9) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 7.603286ms) Feb 5 22:05:19.981: INFO: (9) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 9.558009ms) Feb 5 22:05:19.982: INFO: (9) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 10.606254ms) Feb 5 22:05:19.983: INFO: (9) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 11.328798ms) Feb 5 22:05:19.983: INFO: (9) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 11.746914ms) Feb 5 22:05:19.983: INFO: (9) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 12.00865ms) Feb 5 22:05:19.984: INFO: (9) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 12.745852ms) Feb 5 22:05:19.984: INFO: (9) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 12.819319ms) Feb 5 22:05:19.984: INFO: (9) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 12.854791ms) Feb 5 22:05:19.984: INFO: (9) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 12.788713ms) Feb 5 22:05:19.984: INFO: (9) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 12.856082ms) Feb 5 22:05:19.984: 
INFO: (9) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 13.001895ms) Feb 5 22:05:19.984: INFO: (9) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... (200; 13.139601ms) Feb 5 22:05:19.984: INFO: (9) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 13.131628ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... (200; 11.956171ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 12.080144ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... (200; 12.137121ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test (200; 12.202466ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 12.121342ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 12.229252ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 12.234723ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 12.195958ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 12.182434ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 12.222592ms) Feb 5 22:05:19.997: INFO: (10) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 12.230408ms) Feb 5 22:05:20.000: INFO: (10) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 
15.02322ms) Feb 5 22:05:20.000: INFO: (10) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 14.965802ms) Feb 5 22:05:20.000: INFO: (10) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 15.124062ms) Feb 5 22:05:20.000: INFO: (10) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 15.10335ms) Feb 5 22:05:20.008: INFO: (11) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 7.489ms) Feb 5 22:05:20.009: INFO: (11) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 9.196153ms) Feb 5 22:05:20.009: INFO: (11) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... (200; 8.894149ms) Feb 5 22:05:20.010: INFO: (11) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 9.691351ms) Feb 5 22:05:20.011: INFO: (11) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 10.443965ms) Feb 5 22:05:20.011: INFO: (11) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 10.975047ms) Feb 5 22:05:20.012: INFO: (11) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 10.926749ms) Feb 5 22:05:20.012: INFO: (11) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 11.398617ms) Feb 5 22:05:20.013: INFO: (11) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: ... 
(200; 12.402057ms) Feb 5 22:05:20.013: INFO: (11) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 12.785461ms) Feb 5 22:05:20.013: INFO: (11) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 12.082708ms) Feb 5 22:05:20.013: INFO: (11) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 12.50275ms) Feb 5 22:05:20.014: INFO: (11) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 13.252137ms) Feb 5 22:05:20.014: INFO: (11) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 13.678921ms) Feb 5 22:05:20.015: INFO: (11) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 14.65923ms) Feb 5 22:05:20.023: INFO: (12) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 7.804142ms) Feb 5 22:05:20.024: INFO: (12) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 8.278258ms) Feb 5 22:05:20.025: INFO: (12) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... 
(200; 9.525238ms) Feb 5 22:05:20.025: INFO: (12) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 9.960045ms) Feb 5 22:05:20.026: INFO: (12) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 10.658974ms) Feb 5 22:05:20.026: INFO: (12) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 10.922352ms) Feb 5 22:05:20.026: INFO: (12) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 11.058076ms) Feb 5 22:05:20.027: INFO: (12) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 11.339951ms) Feb 5 22:05:20.027: INFO: (12) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 11.962899ms) Feb 5 22:05:20.027: INFO: (12) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: ... (200; 12.08927ms) Feb 5 22:05:20.031: INFO: (12) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 15.538099ms) Feb 5 22:05:20.031: INFO: (12) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 15.64566ms) Feb 5 22:05:20.031: INFO: (12) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 15.6692ms) Feb 5 22:05:20.031: INFO: (12) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 15.727031ms) Feb 5 22:05:20.031: INFO: (12) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 15.990579ms) Feb 5 22:05:20.037: INFO: (13) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 5.806352ms) Feb 5 22:05:20.037: INFO: (13) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 5.947944ms) Feb 5 22:05:20.040: INFO: (13) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 8.080581ms) Feb 5 
22:05:20.040: INFO: (13) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test<... (200; 7.517875ms) Feb 5 22:05:20.040: INFO: (13) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... (200; 8.362279ms) Feb 5 22:05:20.041: INFO: (13) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 9.701187ms) Feb 5 22:05:20.041: INFO: (13) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 8.401679ms) Feb 5 22:05:20.041: INFO: (13) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 9.254349ms) Feb 5 22:05:20.041: INFO: (13) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 8.385741ms) Feb 5 22:05:20.041: INFO: (13) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 9.465066ms) Feb 5 22:05:20.041: INFO: (13) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 8.582994ms) Feb 5 22:05:20.042: INFO: (13) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 9.469528ms) Feb 5 22:05:20.042: INFO: (13) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 10.531431ms) Feb 5 22:05:20.043: INFO: (13) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 10.627777ms) Feb 5 22:05:20.047: INFO: (14) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 4.101285ms) Feb 5 22:05:20.048: INFO: (14) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... 
(200; 5.571415ms) Feb 5 22:05:20.051: INFO: (14) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 8.495537ms) Feb 5 22:05:20.051: INFO: (14) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 8.646884ms) Feb 5 22:05:20.063: INFO: (14) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 20.185748ms) Feb 5 22:05:20.063: INFO: (14) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 20.585862ms) Feb 5 22:05:20.063: INFO: (14) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 20.383472ms) Feb 5 22:05:20.063: INFO: (14) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 20.351026ms) Feb 5 22:05:20.063: INFO: (14) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 20.58737ms) Feb 5 22:05:20.063: INFO: (14) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 20.631272ms) Feb 5 22:05:20.064: INFO: (14) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: ... (200; 21.745095ms) Feb 5 22:05:20.065: INFO: (14) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 21.755398ms) Feb 5 22:05:20.065: INFO: (14) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 22.184418ms) Feb 5 22:05:20.065: INFO: (14) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 22.338511ms) Feb 5 22:05:20.066: INFO: (14) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 23.36042ms) Feb 5 22:05:20.072: INFO: (15) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 5.481702ms) Feb 5 22:05:20.072: INFO: (15) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... 
(200; 5.955758ms) Feb 5 22:05:20.077: INFO: (15) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 10.248537ms) Feb 5 22:05:20.077: INFO: (15) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 10.783504ms) Feb 5 22:05:20.077: INFO: (15) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 10.838097ms) Feb 5 22:05:20.077: INFO: (15) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 11.024611ms) Feb 5 22:05:20.078: INFO: (15) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: ... (200; 11.86211ms) Feb 5 22:05:20.078: INFO: (15) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 12.03009ms) Feb 5 22:05:20.079: INFO: (15) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 12.19924ms) Feb 5 22:05:20.080: INFO: (15) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 13.622793ms) Feb 5 22:05:20.080: INFO: (15) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 13.441494ms) Feb 5 22:05:20.080: INFO: (15) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 13.498641ms) Feb 5 22:05:20.080: INFO: (15) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 13.889642ms) Feb 5 22:05:20.080: INFO: (15) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 14.203211ms) Feb 5 22:05:20.085: INFO: (16) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 4.722714ms) Feb 5 22:05:20.087: INFO: (16) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 5.893167ms) Feb 5 22:05:20.087: INFO: (16) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 
6.621736ms) Feb 5 22:05:20.087: INFO: (16) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... (200; 6.69219ms) Feb 5 22:05:20.088: INFO: (16) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 6.868738ms) Feb 5 22:05:20.088: INFO: (16) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... (200; 6.975136ms) Feb 5 22:05:20.090: INFO: (16) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 9.721121ms) Feb 5 22:05:20.093: INFO: (16) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 12.069294ms) Feb 5 22:05:20.093: INFO: (16) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 12.404115ms) Feb 5 22:05:20.093: INFO: (16) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test (200; 12.138392ms) Feb 5 22:05:20.107: INFO: (17) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 12.29997ms) Feb 5 22:05:20.107: INFO: (17) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... (200; 12.427864ms) Feb 5 22:05:20.107: INFO: (17) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... (200; 12.673567ms) Feb 5 22:05:20.108: INFO: (17) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 13.097902ms) Feb 5 22:05:20.108: INFO: (17) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test<... (200; 12.32759ms) Feb 5 22:05:20.143: INFO: (18) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: test (200; 12.376777ms) Feb 5 22:05:20.143: INFO: (18) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 12.333191ms) Feb 5 22:05:20.143: INFO: (18) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:1080/proxy/: ... 
(200; 12.395843ms) Feb 5 22:05:20.143: INFO: (18) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 12.946377ms) Feb 5 22:05:20.145: INFO: (18) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 15.068856ms) Feb 5 22:05:20.145: INFO: (18) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 14.448521ms) Feb 5 22:05:20.145: INFO: (18) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname2/proxy/: tls qux (200; 14.496612ms) Feb 5 22:05:20.145: INFO: (18) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 14.676859ms) Feb 5 22:05:20.145: INFO: (18) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 14.688311ms) Feb 5 22:05:20.145: INFO: (18) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 14.772178ms) Feb 5 22:05:20.151: INFO: (19) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:160/proxy/: foo (200; 5.269257ms) Feb 5 22:05:20.151: INFO: (19) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:462/proxy/: tls qux (200; 5.549043ms) Feb 5 22:05:20.152: INFO: (19) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:1080/proxy/: test<... (200; 6.804307ms) Feb 5 22:05:20.153: INFO: (19) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd/proxy/: test (200; 7.88687ms) Feb 5 22:05:20.154: INFO: (19) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:160/proxy/: foo (200; 8.151659ms) Feb 5 22:05:20.154: INFO: (19) /api/v1/namespaces/proxy-1290/pods/proxy-service-6shns-5k8bd:162/proxy/: bar (200; 8.525526ms) Feb 5 22:05:20.154: INFO: (19) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname2/proxy/: bar (200; 8.627256ms) Feb 5 22:05:20.155: INFO: (19) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:443/proxy/: ... 
(200; 9.224315ms) Feb 5 22:05:20.155: INFO: (19) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname2/proxy/: bar (200; 9.406662ms) Feb 5 22:05:20.155: INFO: (19) /api/v1/namespaces/proxy-1290/services/http:proxy-service-6shns:portname1/proxy/: foo (200; 9.662194ms) Feb 5 22:05:20.155: INFO: (19) /api/v1/namespaces/proxy-1290/services/proxy-service-6shns:portname1/proxy/: foo (200; 9.926043ms) Feb 5 22:05:20.156: INFO: (19) /api/v1/namespaces/proxy-1290/pods/https:proxy-service-6shns-5k8bd:460/proxy/: tls baz (200; 9.923433ms) Feb 5 22:05:20.156: INFO: (19) /api/v1/namespaces/proxy-1290/services/https:proxy-service-6shns:tlsportname1/proxy/: tls baz (200; 10.085794ms) Feb 5 22:05:20.156: INFO: (19) /api/v1/namespaces/proxy-1290/pods/http:proxy-service-6shns-5k8bd:162/proxy/: bar (200; 10.070897ms) STEP: deleting ReplicationController proxy-service-6shns in namespace proxy-1290, will wait for the garbage collector to delete the pods Feb 5 22:05:20.214: INFO: Deleting ReplicationController proxy-service-6shns took: 5.711556ms Feb 5 22:05:20.515: INFO: Terminating ReplicationController proxy-service-6shns pods took: 300.697077ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:05:32.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1290" for this suite. 
• [SLOW TEST:24.192 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":155,"skipped":2333,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:05:32.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-ac9aaab8-802b-4c45-ad2e-5e0c265b8197 STEP: Creating a pod to test consume configMaps Feb 5 22:05:32.659: INFO: Waiting up to 5m0s for pod "pod-configmaps-31b22c4a-eea3-4a46-9aa4-e40f9e9600f2" in namespace "configmap-1319" to be "success or failure" Feb 5 22:05:32.678: INFO: Pod "pod-configmaps-31b22c4a-eea3-4a46-9aa4-e40f9e9600f2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.223569ms Feb 5 22:05:34.684: INFO: Pod "pod-configmaps-31b22c4a-eea3-4a46-9aa4-e40f9e9600f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025070626s Feb 5 22:05:36.692: INFO: Pod "pod-configmaps-31b22c4a-eea3-4a46-9aa4-e40f9e9600f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033277002s Feb 5 22:05:38.703: INFO: Pod "pod-configmaps-31b22c4a-eea3-4a46-9aa4-e40f9e9600f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044416933s Feb 5 22:05:40.710: INFO: Pod "pod-configmaps-31b22c4a-eea3-4a46-9aa4-e40f9e9600f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051433909s STEP: Saw pod success Feb 5 22:05:40.711: INFO: Pod "pod-configmaps-31b22c4a-eea3-4a46-9aa4-e40f9e9600f2" satisfied condition "success or failure" Feb 5 22:05:40.715: INFO: Trying to get logs from node jerma-node pod pod-configmaps-31b22c4a-eea3-4a46-9aa4-e40f9e9600f2 container configmap-volume-test: STEP: delete the pod Feb 5 22:05:40.787: INFO: Waiting for pod pod-configmaps-31b22c4a-eea3-4a46-9aa4-e40f9e9600f2 to disappear Feb 5 22:05:40.865: INFO: Pod pod-configmaps-31b22c4a-eea3-4a46-9aa4-e40f9e9600f2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:05:40.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1319" for this suite. 
• [SLOW TEST:8.354 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2346,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 22:05:40.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 5 22:05:41.107: INFO: Waiting up to 5m0s for pod "pod-b99a5df3-cc65-4902-9fb5-95a4731c8869" in namespace "emptydir-1236" to be "success or failure"
Feb 5 22:05:41.130: INFO: Pod "pod-b99a5df3-cc65-4902-9fb5-95a4731c8869": Phase="Pending", Reason="", readiness=false. Elapsed: 22.600946ms
Feb 5 22:05:43.140: INFO: Pod "pod-b99a5df3-cc65-4902-9fb5-95a4731c8869": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031962895s
Feb 5 22:05:45.144: INFO: Pod "pod-b99a5df3-cc65-4902-9fb5-95a4731c8869": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036569233s
Feb 5 22:05:47.150: INFO: Pod "pod-b99a5df3-cc65-4902-9fb5-95a4731c8869": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042754158s
Feb 5 22:05:49.158: INFO: Pod "pod-b99a5df3-cc65-4902-9fb5-95a4731c8869": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050231302s
STEP: Saw pod success
Feb 5 22:05:49.158: INFO: Pod "pod-b99a5df3-cc65-4902-9fb5-95a4731c8869" satisfied condition "success or failure"
Feb 5 22:05:49.162: INFO: Trying to get logs from node jerma-node pod pod-b99a5df3-cc65-4902-9fb5-95a4731c8869 container test-container:
STEP: delete the pod
Feb 5 22:05:49.257: INFO: Waiting for pod pod-b99a5df3-cc65-4902-9fb5-95a4731c8869 to disappear
Feb 5 22:05:49.273: INFO: Pod pod-b99a5df3-cc65-4902-9fb5-95a4731c8869 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 22:05:49.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1236" for this suite.
• [SLOW TEST:8.405 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2350,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 22:05:49.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 5 22:05:56.510: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 22:05:56.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8073" for this suite.
• [SLOW TEST:7.505 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2358,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 22:05:56.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb 5 22:06:05.631: INFO: Successfully updated pod "annotationupdate95c85a82-4633-4cdb-ae04-8e992252cb87"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 22:06:07.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3993" for this suite.
• [SLOW TEST:10.958 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2362,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 22:06:07.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Feb 5 22:06:07.886: INFO: Waiting up to 5m0s for pod "var-expansion-2a0e818f-9477-407e-a1b4-a8a8f5b534e3" in namespace "var-expansion-5818" to be "success or failure"
Feb 5 22:06:07.900: INFO: Pod "var-expansion-2a0e818f-9477-407e-a1b4-a8a8f5b534e3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.94421ms
Feb 5 22:06:09.906: INFO: Pod "var-expansion-2a0e818f-9477-407e-a1b4-a8a8f5b534e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019785215s
Feb 5 22:06:11.914: INFO: Pod "var-expansion-2a0e818f-9477-407e-a1b4-a8a8f5b534e3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028269394s
Feb 5 22:06:13.922: INFO: Pod "var-expansion-2a0e818f-9477-407e-a1b4-a8a8f5b534e3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035869282s
Feb 5 22:06:15.927: INFO: Pod "var-expansion-2a0e818f-9477-407e-a1b4-a8a8f5b534e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.040891143s
STEP: Saw pod success
Feb 5 22:06:15.927: INFO: Pod "var-expansion-2a0e818f-9477-407e-a1b4-a8a8f5b534e3" satisfied condition "success or failure"
Feb 5 22:06:15.929: INFO: Trying to get logs from node jerma-node pod var-expansion-2a0e818f-9477-407e-a1b4-a8a8f5b534e3 container dapi-container:
STEP: delete the pod
Feb 5 22:06:16.006: INFO: Waiting for pod var-expansion-2a0e818f-9477-407e-a1b4-a8a8f5b534e3 to disappear
Feb 5 22:06:16.011: INFO: Pod var-expansion-2a0e818f-9477-407e-a1b4-a8a8f5b534e3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 22:06:16.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5818" for this suite.
• [SLOW TEST:8.268 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2397,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 22:06:16.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 5 22:06:16.269: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 5 22:06:16.399: INFO: Number of nodes with available pods: 0
Feb 5 22:06:16.399: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:06:18.339: INFO: Number of nodes with available pods: 0
Feb 5 22:06:18.340: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:06:18.789: INFO: Number of nodes with available pods: 0
Feb 5 22:06:18.790: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:06:19.624: INFO: Number of nodes with available pods: 0
Feb 5 22:06:19.624: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:06:20.687: INFO: Number of nodes with available pods: 0
Feb 5 22:06:20.687: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:06:21.419: INFO: Number of nodes with available pods: 0
Feb 5 22:06:21.420: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:06:23.794: INFO: Number of nodes with available pods: 0
Feb 5 22:06:23.794: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:06:24.773: INFO: Number of nodes with available pods: 0
Feb 5 22:06:24.773: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:06:25.410: INFO: Number of nodes with available pods: 0
Feb 5 22:06:25.410: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:06:27.012: INFO: Number of nodes with available pods: 1
Feb 5 22:06:27.012: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 5 22:06:27.880: INFO: Number of nodes with available pods: 2
Feb 5 22:06:27.880: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 5 22:06:28.119: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:28.120: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:29.147: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:29.147: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:30.184: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:30.184: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:31.148: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:31.148: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:32.152: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:32.153: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:32.153: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:33.151: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:33.151: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:33.151: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:34.148: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:34.148: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:34.148: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:35.146: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:35.146: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:35.146: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:36.149: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:36.149: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:36.149: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:37.148: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:37.148: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:37.148: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:38.149: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:38.149: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:38.149: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:39.149: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:39.149: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:39.149: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:40.147: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:40.148: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:40.148: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:41.150: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:41.150: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:41.150: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:42.156: INFO: Wrong image for pod: daemon-set-rvgvr. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:42.156: INFO: Pod daemon-set-rvgvr is not available
Feb 5 22:06:42.156: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:43.148: INFO: Pod daemon-set-rrpz8 is not available
Feb 5 22:06:43.148: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:44.151: INFO: Pod daemon-set-rrpz8 is not available
Feb 5 22:06:44.151: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:45.185: INFO: Pod daemon-set-rrpz8 is not available
Feb 5 22:06:45.185: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:46.148: INFO: Pod daemon-set-rrpz8 is not available
Feb 5 22:06:46.148: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:48.082: INFO: Pod daemon-set-rrpz8 is not available
Feb 5 22:06:48.082: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:48.518: INFO: Pod daemon-set-rrpz8 is not available
Feb 5 22:06:48.518: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:49.149: INFO: Pod daemon-set-rrpz8 is not available
Feb 5 22:06:49.149: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:50.290: INFO: Pod daemon-set-rrpz8 is not available
Feb 5 22:06:50.291: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:51.146: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:52.147: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:53.146: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:54.149: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:55.147: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:55.147: INFO: Pod daemon-set-twbsq is not available
Feb 5 22:06:56.152: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:56.152: INFO: Pod daemon-set-twbsq is not available
Feb 5 22:06:57.148: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:57.148: INFO: Pod daemon-set-twbsq is not available
Feb 5 22:06:58.148: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:58.148: INFO: Pod daemon-set-twbsq is not available
Feb 5 22:06:59.149: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:06:59.149: INFO: Pod daemon-set-twbsq is not available
Feb 5 22:07:00.148: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:07:00.148: INFO: Pod daemon-set-twbsq is not available
Feb 5 22:07:01.148: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:07:01.149: INFO: Pod daemon-set-twbsq is not available
Feb 5 22:07:02.153: INFO: Wrong image for pod: daemon-set-twbsq. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 5 22:07:02.153: INFO: Pod daemon-set-twbsq is not available
Feb 5 22:07:03.148: INFO: Pod daemon-set-wkh89 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 5 22:07:03.163: INFO: Number of nodes with available pods: 1
Feb 5 22:07:03.163: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:07:04.179: INFO: Number of nodes with available pods: 1
Feb 5 22:07:04.179: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:07:05.172: INFO: Number of nodes with available pods: 1
Feb 5 22:07:05.172: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:07:06.172: INFO: Number of nodes with available pods: 1
Feb 5 22:07:06.173: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:07:07.172: INFO: Number of nodes with available pods: 1
Feb 5 22:07:07.172: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:07:08.172: INFO: Number of nodes with available pods: 1
Feb 5 22:07:08.172: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:07:09.210: INFO: Number of nodes with available pods: 1
Feb 5 22:07:09.210: INFO: Node jerma-node is running more than one daemon pod
Feb 5 22:07:10.175: INFO: Number of nodes with available pods: 2
Feb 5 22:07:10.175: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6815, will wait for the garbage collector to delete the pods
Feb 5 22:07:10.265: INFO: Deleting DaemonSet.extensions daemon-set took: 8.97259ms
Feb 5 22:07:10.565: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.558159ms
Feb 5 22:07:23.180: INFO: Number of nodes with available pods: 0
Feb 5 22:07:23.180: INFO: Number of running nodes: 0, number of available pods: 0
Feb 5 22:07:23.196: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6815/daemonsets","resourceVersion":"6616587"},"items":null}
Feb 5 22:07:23.207: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6815/pods","resourceVersion":"6616587"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 22:07:23.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6815" for this suite.
• [SLOW TEST:67.323 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":161,"skipped":2406,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 22:07:23.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-eb2a1e5a-8c7b-465f-8bb9-ccbab0c7a5b6
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-eb2a1e5a-8c7b-465f-8bb9-ccbab0c7a5b6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 22:08:34.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2846" for this suite.
• [SLOW TEST:71.454 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2412,"failed":0}
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 22:08:34.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb 5 22:08:43.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7921" for this suite.
• [SLOW TEST:8.304 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2412,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb 5 22:08:43.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb 5 22:08:43.203: INFO: Creating deployment "webserver-deployment"
Feb 5 22:08:43.211: INFO: Waiting for observed generation 1
Feb 5 22:08:46.113: INFO: Waiting for all required pods to come up
Feb 5 22:08:46.157: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 5 22:09:12.851: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 5 22:09:12.870: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 5 22:09:12.885: INFO: Updating deployment webserver-deployment
Feb 5 22:09:12.885: INFO: Waiting for observed generation 2
Feb 5 22:09:15.260: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 5 22:09:15.764: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 5 22:09:15.778: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 5 22:09:15.967: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 5 22:09:15.967: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 5 22:09:15.973: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 5 22:09:15.979: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb 5 22:09:15.979: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 5 22:09:15.988: INFO: Updating deployment webserver-deployment
Feb 5 22:09:15.988: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb 5 22:09:16.644: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 5 22:09:19.186: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb 5 22:09:22.616: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8311 /apis/apps/v1/namespaces/deployment-8311/deployments/webserver-deployment 90c6039d-f593-416d-9639-cff689430406 6617160 3 2020-02-05 22:08:43 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00389a6e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-05 22:09:16 +0000 UTC,LastTransitionTime:2020-02-05 22:09:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-05 22:09:19 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
Feb 5 22:09:23.669: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-8311 /apis/apps/v1/namespaces/deployment-8311/replicasets/webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 6617156 3 2020-02-05 22:09:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8]
map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 90c6039d-f593-416d-9639-cff689430406 0xc0050a04f7 0xc0050a04f8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0050a0568 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 5 22:09:23.669: INFO: All old ReplicaSets of Deployment "webserver-deployment": Feb 5 22:09:23.670: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-8311 /apis/apps/v1/namespaces/deployment-8311/replicasets/webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 6617155 3 2020-02-05 22:08:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 90c6039d-f593-416d-9639-cff689430406 0xc0050a0437 0xc0050a0438}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0050a0498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Feb 5 22:09:25.614: INFO: Pod "webserver-deployment-595b5b9587-4p8ss" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4p8ss webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-4p8ss 07ef6fb6-390d-4687-93eb-4ce4646c7c03 6617010 0 2020-02-05 22:08:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a0a07 0xc0050a0a08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-02-05 22:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:09:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://3bfba0c2799020dbbcccaf4b97da20ad9a5eaf366ea2aa0f5f24905455817798,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.614: INFO: Pod "webserver-deployment-595b5b9587-6b5f5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6b5f5 webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-6b5f5 ff7b6b8c-3967-484d-b06c-8ac624bab24a 6617137 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a0b80 0xc0050a0b81}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.614: INFO: Pod "webserver-deployment-595b5b9587-7rbv9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7rbv9 webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-7rbv9 368bc8ad-2a2b-4ea7-baea-980bc4bfcc4b 6617133 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a0c97 0xc0050a0c98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.614: INFO: Pod "webserver-deployment-595b5b9587-7sw74" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7sw74 webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-7sw74 8675e368-c385-438a-9163-d23732f40e3a 6617163 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a0da7 0xc0050a0da8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-05 22:09:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.615: INFO: Pod "webserver-deployment-595b5b9587-7xrgt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7xrgt webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-7xrgt 962a4a95-880e-43ff-a705-a14c73b68ca5 6617001 0 2020-02-05 22:08:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a0ef7 0xc0050a0ef8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-05 22:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:09:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0a002e00ad49ba2037b69084ca33362226d90ae2a84a50b922702fdd3db95e20,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.615: INFO: Pod "webserver-deployment-595b5b9587-9kvqz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9kvqz webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-9kvqz dba1f57e-487d-4e83-a944-0695e07c2f43 6617114 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a1070 0xc0050a1071}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.615: INFO: Pod "webserver-deployment-595b5b9587-bd758" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bd758 webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-bd758 5bcf1c88-17d1-444f-939a-94d599d44229 6617172 0 2020-02-05 22:09:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a1187 0xc0050a1188}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-05 22:09:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.615: INFO: Pod "webserver-deployment-595b5b9587-c9294" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c9294 webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-c9294 abd31a0d-d235-43c0-af1a-98b2eca7c87e 6617129 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a12e7 0xc0050a12e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.615: INFO: Pod "webserver-deployment-595b5b9587-d6n47" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d6n47 webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-d6n47 60415e21-eb35-4e61-be5c-2f64d8ff797e 6617007 0 2020-02-05 22:08:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a1407 0xc0050a1408}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-02-05 22:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:09:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://608928d769caeaebaa53492b5d0e734ecdcb23c46d99b8a24256783fb141dd59,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.616: INFO: Pod "webserver-deployment-595b5b9587-dct8g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dct8g webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-dct8g fab68279-0b9b-4608-a3e6-16f5998b7dfe 6617154 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a1570 0xc0050a1571}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-05 22:09:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.616: INFO: Pod "webserver-deployment-595b5b9587-dx5l9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dx5l9 webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-dx5l9 83777b5e-94a1-4955-beed-dc2eaf4247d0 6617130 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a16b7 0xc0050a16b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.616: INFO: Pod "webserver-deployment-595b5b9587-hrxrp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hrxrp webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-hrxrp d4c07de0-c262-4881-8e3f-10d29b1e1de5 6617136 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a17d7 0xc0050a17d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.616: INFO: Pod "webserver-deployment-595b5b9587-ksrvt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ksrvt webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-ksrvt 24d400df-d2b0-4b0d-857c-3821e7667620 6617157 0 2020-02-05 22:09:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a18e7 0xc0050a18e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-05 22:09:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.616: INFO: Pod "webserver-deployment-595b5b9587-lsvl7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lsvl7 webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-lsvl7 1015a98a-a23f-4119-a870-3dbe5d49756f 6616966 0 2020-02-05 22:08:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a1a57 0xc0050a1a58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-05 22:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:09:08 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://93708206120eaf681730f5d5fac7c1b4cdff15f48422b3a2ba1d7866e502ec0e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.617: INFO: Pod "webserver-deployment-595b5b9587-mhvfk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mhvfk webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-mhvfk d6d2269f-0cd0-431f-9d1a-3c2a7d01fc2e 6616998 0 2020-02-05 22:08:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a1bf0 0xc0050a1bf1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-02-05 22:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:09:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c077b857910c8aafc2cb7bd73675452cd20f26204938acc18f83ac435b31d844,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.617: INFO: Pod "webserver-deployment-595b5b9587-q47p8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q47p8 webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-q47p8 d45f8bf2-5de4-4063-b464-c6da85680f81 6616984 0 2020-02-05 22:08:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a1d60 0xc0050a1d61}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-02-05 22:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:09:09 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://d72269cf25476be428c8f29a64bf8b906923beb91e4d147fd8bb8fff2686cad8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.617: INFO: Pod "webserver-deployment-595b5b9587-qqxbt" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qqxbt webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-qqxbt 9f66f005-7657-4d0d-bd47-acc89b1ea500 6617004 0 2020-02-05 22:08:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc0050a1ed0 0xc0050a1ed1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Ho
stname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-02-05 22:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:09:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://a51c6b7ad49e4e9025a3fd1a359ac799b59b0a94304cd4c3b7d62291b5bd3694,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.617: INFO: Pod "webserver-deployment-595b5b9587-w6sxj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w6sxj webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-w6sxj 5df63d9b-600b-46be-b72f-120e259c3310 6617117 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc004e86050 0xc004e86051}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.617: INFO: Pod "webserver-deployment-595b5b9587-xj4bf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xj4bf webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-xj4bf a79e2614-efd5-4450-9149-e3a38bba6da7 6616973 0 2020-02-05 22:08:43 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc004e86167 0xc004e86168}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-05 22:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:09:06 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://62178383b674905273c8ac54797ffea00bcd29e041d60d525363171da3d9e72c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.617: INFO: Pod "webserver-deployment-595b5b9587-xw7p5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xw7p5 webserver-deployment-595b5b9587- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-595b5b9587-xw7p5 a22c6e8e-cb3e-420d-b129-a9234372760e 6617164 0 2020-02-05 22:09:16 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e1844232-608e-4a11-b70f-cbf684956589 0xc004e862e0 0xc004e862e1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdoma
in:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-05 22:09:19 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.618: INFO: Pod "webserver-deployment-c7997dcc8-2sdht" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2sdht webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-2sdht aa116bdb-0352-4ba7-950f-bba54c826bcf 6617135 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e86437 0xc004e86438}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affin
ity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.618: INFO: Pod "webserver-deployment-c7997dcc8-42njs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-42njs webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-42njs 574969a7-a890-47b0-b23b-2eb33d43c298 6617046 0 2020-02-05 22:09:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e86557 0xc004e86558}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affin
ity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-05 22:09:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.618: INFO: Pod "webserver-deployment-c7997dcc8-6mcgm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6mcgm webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-6mcgm e2c302de-ed55-4475-9d29-feb881164826 6617139 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e866c7 0xc004e866c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedul
erName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.618: INFO: Pod "webserver-deployment-c7997dcc8-6wgt8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6wgt8 webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-6wgt8 ad4bbbe1-fb93-4eaf-8c96-68a302a65015 6617134 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e86817 0xc004e86818}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedul
erName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.618: INFO: Pod "webserver-deployment-c7997dcc8-cpv6b" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cpv6b webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-cpv6b 02cd3c28-ee73-416a-9566-892aa0af7469 6617140 0 2020-02-05 22:09:16 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e86957 0xc004e86958}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affin
ity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-05 22:09:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.619: INFO: Pod "webserver-deployment-c7997dcc8-jkxhb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jkxhb webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-jkxhb 293e6f33-51fe-4cba-9963-ea6c30258dc3 6617058 0 2020-02-05 22:09:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e86ac7 0xc004e86ac8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affin
ity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-05 22:09:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.619: INFO: Pod "webserver-deployment-c7997dcc8-jw4dn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jw4dn webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-jw4dn 10455f44-3408-40f0-8806-32e790708bd8 6617127 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e86c37 0xc004e86c38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedul
erName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.619: INFO: Pod "webserver-deployment-c7997dcc8-k4pqb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k4pqb webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-k4pqb 0438a6e4-527a-4925-a56a-93246b5897ae 6617115 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e86d67 0xc004e86d68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedul
erName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.619: INFO: Pod "webserver-deployment-c7997dcc8-kxdmz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kxdmz webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-kxdmz 121a00a8-7c35-49d4-9632-321b6f23e9dc 6617072 0 2020-02-05 22:09:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e86e97 0xc004e86e98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedul
erName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-05 22:09:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.619: INFO: Pod "webserver-deployment-c7997dcc8-nkjcl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nkjcl webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-nkjcl 4d72650c-cc6f-4624-9f43-428ea4220ea7 6617043 0 2020-02-05 22:09:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e87017 0xc004e87018}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedul
erName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-05 22:09:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.620: INFO: Pod "webserver-deployment-c7997dcc8-q29mt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q29mt webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-q29mt da6a1b87-9473-48b2-8cbd-4898d838eba2 6617132 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e87197 0xc004e87198}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affin
ity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.620: INFO: Pod "webserver-deployment-c7997dcc8-s594j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s594j webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-s594j 3ed56264-b263-40eb-8de7-90a0bc127300 6617169 0 2020-02-05 22:09:17 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e872b7 0xc004e872b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affin
ity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-05 22:09:17 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 5 22:09:25.620: INFO: Pod "webserver-deployment-c7997dcc8-snxw6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-snxw6 webserver-deployment-c7997dcc8- deployment-8311 /api/v1/namespaces/deployment-8311/pods/webserver-deployment-c7997dcc8-snxw6 6a1aadad-d964-48ec-bc9b-d1c1f0ab20c0 6617055 0 2020-02-05 22:09:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4cf33fe3-4445-4852-8da8-fe8beb3b6688 0xc004e87427 0xc004e87428}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l8tcf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l8tcf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l8tcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Schedul
erName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:09:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-05 22:09:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:09:25.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8311" for this suite. • [SLOW TEST:44.419 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":164,"skipped":2441,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:09:27.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should 
be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-914 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-914 STEP: creating replication controller externalsvc in namespace services-914 I0205 22:09:29.696230 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-914, replica count: 2 I0205 22:09:32.747002 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:09:35.747621 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:09:38.749173 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:09:41.758816 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:09:44.759473 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:09:47.759860 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:09:50.760680 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:09:53.761624 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0205 22:09:56.762329 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:09:59.762940 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:02.763558 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:05.764385 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:08.765021 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:11.765566 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:14.766849 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:17.767574 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:20.768122 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:23.768819 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:26.769414 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:29.770141 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 
0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:32.771243 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:35.772123 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:38.772641 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:41.773136 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:44.773778 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:47.774536 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:50.775174 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:53.775848 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0205 22:10:56.776531 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Feb 5 22:10:56.831: INFO: Creating new exec pod Feb 5 22:11:04.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-914 execpodkx59m -- /bin/sh -x -c nslookup clusterip-service' Feb 5 22:11:05.256: INFO: stderr: "I0205 22:11:05.024389 2645 log.go:172] (0xc000608dc0) (0xc0009c65a0) Create stream\nI0205 
22:11:05.024574 2645 log.go:172] (0xc000608dc0) (0xc0009c65a0) Stream added, broadcasting: 1\nI0205 22:11:05.028731 2645 log.go:172] (0xc000608dc0) Reply frame received for 1\nI0205 22:11:05.028797 2645 log.go:172] (0xc000608dc0) (0xc0008e2320) Create stream\nI0205 22:11:05.028810 2645 log.go:172] (0xc000608dc0) (0xc0008e2320) Stream added, broadcasting: 3\nI0205 22:11:05.031132 2645 log.go:172] (0xc000608dc0) Reply frame received for 3\nI0205 22:11:05.031168 2645 log.go:172] (0xc000608dc0) (0xc0009c6640) Create stream\nI0205 22:11:05.031178 2645 log.go:172] (0xc000608dc0) (0xc0009c6640) Stream added, broadcasting: 5\nI0205 22:11:05.033746 2645 log.go:172] (0xc000608dc0) Reply frame received for 5\nI0205 22:11:05.130242 2645 log.go:172] (0xc000608dc0) Data frame received for 5\nI0205 22:11:05.130305 2645 log.go:172] (0xc0009c6640) (5) Data frame handling\nI0205 22:11:05.130331 2645 log.go:172] (0xc0009c6640) (5) Data frame sent\n+ nslookup clusterip-service\nI0205 22:11:05.144633 2645 log.go:172] (0xc000608dc0) Data frame received for 3\nI0205 22:11:05.144684 2645 log.go:172] (0xc0008e2320) (3) Data frame handling\nI0205 22:11:05.144713 2645 log.go:172] (0xc0008e2320) (3) Data frame sent\nI0205 22:11:05.148360 2645 log.go:172] (0xc000608dc0) Data frame received for 3\nI0205 22:11:05.148420 2645 log.go:172] (0xc0008e2320) (3) Data frame handling\nI0205 22:11:05.148444 2645 log.go:172] (0xc0008e2320) (3) Data frame sent\nI0205 22:11:05.247741 2645 log.go:172] (0xc000608dc0) (0xc0008e2320) Stream removed, broadcasting: 3\nI0205 22:11:05.247978 2645 log.go:172] (0xc000608dc0) (0xc0009c6640) Stream removed, broadcasting: 5\nI0205 22:11:05.248002 2645 log.go:172] (0xc000608dc0) Data frame received for 1\nI0205 22:11:05.248047 2645 log.go:172] (0xc0009c65a0) (1) Data frame handling\nI0205 22:11:05.248059 2645 log.go:172] (0xc0009c65a0) (1) Data frame sent\nI0205 22:11:05.248071 2645 log.go:172] (0xc000608dc0) (0xc0009c65a0) Stream removed, broadcasting: 1\nI0205 
22:11:05.248816 2645 log.go:172] (0xc000608dc0) (0xc0009c65a0) Stream removed, broadcasting: 1\nI0205 22:11:05.248848 2645 log.go:172] (0xc000608dc0) (0xc0008e2320) Stream removed, broadcasting: 3\nI0205 22:11:05.248855 2645 log.go:172] (0xc000608dc0) (0xc0009c6640) Stream removed, broadcasting: 5\nI0205 22:11:05.249057 2645 log.go:172] (0xc000608dc0) Go away received\n" Feb 5 22:11:05.256: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-914.svc.cluster.local\tcanonical name = externalsvc.services-914.svc.cluster.local.\nName:\texternalsvc.services-914.svc.cluster.local\nAddress: 10.96.124.36\n\n" STEP: deleting ReplicationController externalsvc in namespace services-914, will wait for the garbage collector to delete the pods Feb 5 22:11:05.320: INFO: Deleting ReplicationController externalsvc took: 8.301927ms Feb 5 22:11:05.720: INFO: Terminating ReplicationController externalsvc pods took: 400.52229ms Feb 5 22:11:22.546: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:11:22.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-914" for this suite. 
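The mutation this test performs can be sketched as a manifest: `clusterip-service` starts as an ordinary ClusterIP Service and is switched to `type: ExternalName`, after which the nslookup above returns a CNAME to `externalsvc` rather than a cluster IP. This is an illustrative sketch reconstructed from the log, not the suite's actual object:

```yaml
# Post-mutation shape of clusterip-service (sketch): an ExternalName Service
# aliases another DNS name, so cluster DNS answers with a CNAME record
# (externalsvc.services-914.svc.cluster.local, per the nslookup output above).
apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-914
spec:
  type: ExternalName
  externalName: externalsvc.services-914.svc.cluster.local
```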
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:115.128 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":165,"skipped":2446,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:11:22.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9cec7f8d-0445-46d8-a3d7-87bedb0487df STEP: Creating a pod to test consume secrets Feb 5 22:11:22.810: INFO: Waiting up to 5m0s for pod "pod-secrets-24852fe5-4d76-493c-9105-af34516a8325" in namespace "secrets-3749" to be "success or failure" Feb 5 22:11:22.852: INFO: Pod "pod-secrets-24852fe5-4d76-493c-9105-af34516a8325": Phase="Pending", Reason="", readiness=false. 
Elapsed: 42.141341ms Feb 5 22:11:24.874: INFO: Pod "pod-secrets-24852fe5-4d76-493c-9105-af34516a8325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064211839s Feb 5 22:11:26.930: INFO: Pod "pod-secrets-24852fe5-4d76-493c-9105-af34516a8325": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119921569s Feb 5 22:11:28.935: INFO: Pod "pod-secrets-24852fe5-4d76-493c-9105-af34516a8325": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125049626s Feb 5 22:11:30.944: INFO: Pod "pod-secrets-24852fe5-4d76-493c-9105-af34516a8325": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13408583s Feb 5 22:11:33.013: INFO: Pod "pod-secrets-24852fe5-4d76-493c-9105-af34516a8325": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.203649678s STEP: Saw pod success Feb 5 22:11:33.014: INFO: Pod "pod-secrets-24852fe5-4d76-493c-9105-af34516a8325" satisfied condition "success or failure" Feb 5 22:11:33.020: INFO: Trying to get logs from node jerma-node pod pod-secrets-24852fe5-4d76-493c-9105-af34516a8325 container secret-volume-test: STEP: delete the pod Feb 5 22:11:33.523: INFO: Waiting for pod pod-secrets-24852fe5-4d76-493c-9105-af34516a8325 to disappear Feb 5 22:11:33.538: INFO: Pod pod-secrets-24852fe5-4d76-493c-9105-af34516a8325 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:11:33.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3749" for this suite. 
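The pattern exercised here, one Secret mounted at two paths in a single pod, can be sketched as below. The secret name comes from the log; the pod name, image, command, and mount paths are assumptions for illustration (the suite uses its own generated names and test image):

```yaml
# Sketch: the same Secret exposed through two volume entries, each mounted
# at a different path in the one test container.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # hypothetical; the suite generates a UUID name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-9cec7f8d-0445-46d8-a3d7-87bedb0487df
  - name: secret-volume-2
    secret:
      secretName: secret-test-9cec7f8d-0445-46d8-a3d7-87bedb0487df
  containers:
  - name: secret-volume-test
    image: busybox   # assumption; the e2e suite uses its own mount-test image
    command: ["ls", "/etc/secret-volume-1", "/etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1   # hypothetical paths
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
```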
• [SLOW TEST:10.893 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2460,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:11:33.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-2045 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2045 to expose endpoints map[] Feb 5 22:11:33.886: INFO: successfully validated that service endpoint-test2 in namespace services-2045 exposes endpoints map[] (15.287018ms elapsed) STEP: Creating pod pod1 in namespace services-2045 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2045 to expose endpoints map[pod1:[80]] Feb 5 22:11:38.093: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.189710024s elapsed, 
will retry) Feb 5 22:11:42.172: INFO: successfully validated that service endpoint-test2 in namespace services-2045 exposes endpoints map[pod1:[80]] (8.267897875s elapsed) STEP: Creating pod pod2 in namespace services-2045 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2045 to expose endpoints map[pod1:[80] pod2:[80]] Feb 5 22:11:47.168: INFO: Unexpected endpoints: found map[9e0e415d-6cf9-4fc0-bdfb-5c8cc4ed13a2:[80]], expected map[pod1:[80] pod2:[80]] (4.988363782s elapsed, will retry) Feb 5 22:11:50.681: INFO: successfully validated that service endpoint-test2 in namespace services-2045 exposes endpoints map[pod1:[80] pod2:[80]] (8.501579367s elapsed) STEP: Deleting pod pod1 in namespace services-2045 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2045 to expose endpoints map[pod2:[80]] Feb 5 22:11:51.762: INFO: successfully validated that service endpoint-test2 in namespace services-2045 exposes endpoints map[pod2:[80]] (1.047111367s elapsed) STEP: Deleting pod pod2 in namespace services-2045 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2045 to expose endpoints map[] Feb 5 22:11:51.803: INFO: successfully validated that service endpoint-test2 in namespace services-2045 exposes endpoints map[] (25.93179ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Feb 5 22:11:51.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2045" for this suite. 
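The endpoint-test2 flow above amounts to watching a selector-based Service's Endpoints object track pods as they come and go. A sketch of the Service side follows; the selector label is an assumption, since the log only shows the pod names and port:

```yaml
# Sketch: endpoints for this Service appear when pod1/pod2 (carrying the
# matching label) become ready, and disappear again when they are deleted,
# matching the map[] -> map[pod1:[80] pod2:[80]] -> map[] transitions above.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
  namespace: services-2045
spec:
  selector:
    name: endpoint-test2   # assumed label on pod1 and pod2
  ports:
  - port: 80
    targetPort: 80
```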
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.416 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":167,"skipped":2473,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Feb 5 22:11:51.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5663.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5663.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5663.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 5 22:12:06.262: INFO: DNS probes 
using dns-test-00999cfe-e03e-4dd6-be6f-0efe8e920783 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5663.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5663.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5663.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  5 22:12:20.456: INFO: File wheezy_udp@dns-test-service-3.dns-5663.svc.cluster.local from pod dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb  5 22:12:20.460: INFO: File jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local from pod dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb  5 22:12:20.460: INFO: Lookups using dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 failed for: [wheezy_udp@dns-test-service-3.dns-5663.svc.cluster.local jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local]
Feb  5 22:12:25.470: INFO: File wheezy_udp@dns-test-service-3.dns-5663.svc.cluster.local from pod dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb  5 22:12:25.477: INFO: File jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local from pod dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb  5 22:12:25.477: INFO: Lookups using dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 failed for: [wheezy_udp@dns-test-service-3.dns-5663.svc.cluster.local jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local]
Feb  5 22:12:30.471: INFO: File wheezy_udp@dns-test-service-3.dns-5663.svc.cluster.local from pod dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb  5 22:12:30.480: INFO: File jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local from pod dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 contains 'foo.example.com. ' instead of 'bar.example.com.'
Feb  5 22:12:30.480: INFO: Lookups using dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 failed for: [wheezy_udp@dns-test-service-3.dns-5663.svc.cluster.local jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local]
Feb  5 22:12:35.493: INFO: File jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local from pod dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 contains '' instead of 'bar.example.com.'
Feb  5 22:12:35.493: INFO: Lookups using dns-5663/dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 failed for: [jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local]
Feb  5 22:12:40.476: INFO: DNS probes using dns-test-6f97bb12-2245-4ff1-8c08-60ba0db8dc30 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5663.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5663.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5663.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5663.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  5 22:12:52.763: INFO: DNS probes using dns-test-d80cd8f6-df6a-44e4-8bb8-a387edb38311 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:12:52.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5663" for this suite.

• [SLOW TEST:60.871 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":168,"skipped":2477,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:12:52.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:13:53.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9600" for this suite.
• [SLOW TEST:60.232 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2503,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:13:53.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  5 22:13:53.217: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd1d3df0-88a8-412f-800b-20266e4623af" in namespace "downward-api-251" to be "success or failure"
Feb  5 22:13:53.243: INFO: Pod "downwardapi-volume-dd1d3df0-88a8-412f-800b-20266e4623af": Phase="Pending", Reason="", readiness=false. Elapsed: 26.283876ms
Feb  5 22:13:55.252: INFO: Pod "downwardapi-volume-dd1d3df0-88a8-412f-800b-20266e4623af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034797506s
Feb  5 22:13:57.262: INFO: Pod "downwardapi-volume-dd1d3df0-88a8-412f-800b-20266e4623af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045432618s
Feb  5 22:13:59.399: INFO: Pod "downwardapi-volume-dd1d3df0-88a8-412f-800b-20266e4623af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182215317s
Feb  5 22:14:01.406: INFO: Pod "downwardapi-volume-dd1d3df0-88a8-412f-800b-20266e4623af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.189520392s
STEP: Saw pod success
Feb  5 22:14:01.407: INFO: Pod "downwardapi-volume-dd1d3df0-88a8-412f-800b-20266e4623af" satisfied condition "success or failure"
Feb  5 22:14:01.412: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-dd1d3df0-88a8-412f-800b-20266e4623af container client-container: 
STEP: delete the pod
Feb  5 22:14:01.863: INFO: Waiting for pod downwardapi-volume-dd1d3df0-88a8-412f-800b-20266e4623af to disappear
Feb  5 22:14:01.887: INFO: Pod downwardapi-volume-dd1d3df0-88a8-412f-800b-20266e4623af no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:14:01.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-251" for this suite.

• [SLOW TEST:8.845 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2519,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:14:01.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:14:02.175: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  5 22:14:02.190: INFO: Number of nodes with available pods: 0
Feb  5 22:14:02.190: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  5 22:14:02.348: INFO: Number of nodes with available pods: 0
Feb  5 22:14:02.348: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:03.357: INFO: Number of nodes with available pods: 0
Feb  5 22:14:03.357: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:04.357: INFO: Number of nodes with available pods: 0
Feb  5 22:14:04.358: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:05.356: INFO: Number of nodes with available pods: 0
Feb  5 22:14:05.356: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:06.356: INFO: Number of nodes with available pods: 0
Feb  5 22:14:06.356: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:07.408: INFO: Number of nodes with available pods: 0
Feb  5 22:14:07.408: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:08.355: INFO: Number of nodes with available pods: 0
Feb  5 22:14:08.355: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:09.359: INFO: Number of nodes with available pods: 0
Feb  5 22:14:09.359: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:10.366: INFO: Number of nodes with available pods: 1
Feb  5 22:14:10.367: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  5 22:14:10.453: INFO: Number of nodes with available pods: 1
Feb  5 22:14:10.453: INFO: Number of running nodes: 0, number of available pods: 1
Feb  5 22:14:11.464: INFO: Number of nodes with available pods: 0
Feb  5 22:14:11.464: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  5 22:14:11.574: INFO: Number of nodes with available pods: 0
Feb  5 22:14:11.574: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:12.583: INFO: Number of nodes with available pods: 0
Feb  5 22:14:12.583: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:13.579: INFO: Number of nodes with available pods: 0
Feb  5 22:14:13.579: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:14.581: INFO: Number of nodes with available pods: 0
Feb  5 22:14:14.581: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:15.581: INFO: Number of nodes with available pods: 0
Feb  5 22:14:15.581: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:16.584: INFO: Number of nodes with available pods: 0
Feb  5 22:14:16.584: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:17.601: INFO: Number of nodes with available pods: 0
Feb  5 22:14:17.601: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:18.585: INFO: Number of nodes with available pods: 0
Feb  5 22:14:18.586: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:20.038: INFO: Number of nodes with available pods: 0
Feb  5 22:14:20.039: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:20.586: INFO: Number of nodes with available pods: 0
Feb  5 22:14:20.586: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:21.580: INFO: Number of nodes with available pods: 0
Feb  5 22:14:21.580: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:22.586: INFO: Number of nodes with available pods: 0
Feb  5 22:14:22.587: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:23.579: INFO: Number of nodes with available pods: 0
Feb  5 22:14:23.579: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:24.598: INFO: Number of nodes with available pods: 0
Feb  5 22:14:24.599: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:25.581: INFO: Number of nodes with available pods: 0
Feb  5 22:14:25.581: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:14:26.698: INFO: Number of nodes with available pods: 1
Feb  5 22:14:26.698: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8201, will wait for the garbage collector to delete the pods
Feb  5 22:14:26.877: INFO: Deleting DaemonSet.extensions daemon-set took: 81.329556ms
Feb  5 22:14:27.177: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.613023ms
Feb  5 22:14:33.111: INFO: Number of nodes with available pods: 0
Feb  5 22:14:33.111: INFO: Number of running nodes: 0, number of available pods: 0
Feb  5 22:14:33.120: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8201/daemonsets","resourceVersion":"6618413"},"items":null}
Feb  5 22:14:33.133: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8201/pods","resourceVersion":"6618414"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:14:33.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8201" for this suite.
• [SLOW TEST:31.262 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":171,"skipped":2529,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:14:33.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1576
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  5 22:14:33.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1625'
Feb  5 22:14:35.784: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  5 22:14:35.784: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
Feb  5 22:14:37.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1625'
Feb  5 22:14:38.025: INFO: stderr: ""
Feb  5 22:14:38.025: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:14:38.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1625" for this suite.

• [SLOW TEST:5.008 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1570
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":172,"skipped":2552,"failed":0}
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:14:38.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:14:38.445: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 16.2337ms)
Feb  5 22:14:38.450: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.137544ms)
Feb  5 22:14:38.468: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.549173ms)
Feb  5 22:14:38.489: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 20.816271ms)
Feb  5 22:14:38.496: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.209772ms)
Feb  5 22:14:38.513: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.369539ms)
Feb  5 22:14:38.520: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.136913ms)
Feb  5 22:14:38.548: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 27.422483ms)
Feb  5 22:14:38.553: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.183929ms)
Feb  5 22:14:38.558: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.492127ms)
Feb  5 22:14:38.562: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.970405ms)
Feb  5 22:14:38.568: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.106658ms)
Feb  5 22:14:38.583: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.13544ms)
Feb  5 22:14:38.624: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 41.121521ms)
Feb  5 22:14:38.638: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.138777ms)
Feb  5 22:14:38.720: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 82.253623ms)
Feb  5 22:14:38.737: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 16.792764ms)
Feb  5 22:14:38.746: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.051443ms)
Feb  5 22:14:38.764: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 17.936305ms)
Feb  5 22:14:38.772: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.444394ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:14:38.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2403" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":173,"skipped":2558,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:14:38.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Feb  5 22:14:38.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:14:54.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8307" for this suite.

• [SLOW TEST:15.534 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":174,"skipped":2559,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:14:54.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  5 22:14:54.392: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21828037-af09-402a-b382-7113acac1284" in namespace "downward-api-9363" to be "success or failure"
Feb  5 22:14:54.453: INFO: Pod "downwardapi-volume-21828037-af09-402a-b382-7113acac1284": Phase="Pending", Reason="", readiness=false. Elapsed: 61.529821ms
Feb  5 22:14:56.462: INFO: Pod "downwardapi-volume-21828037-af09-402a-b382-7113acac1284": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070407757s
Feb  5 22:14:58.476: INFO: Pod "downwardapi-volume-21828037-af09-402a-b382-7113acac1284": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084522663s
Feb  5 22:15:00.607: INFO: Pod "downwardapi-volume-21828037-af09-402a-b382-7113acac1284": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215143555s
Feb  5 22:15:02.618: INFO: Pod "downwardapi-volume-21828037-af09-402a-b382-7113acac1284": Phase="Pending", Reason="", readiness=false. Elapsed: 8.226015436s
Feb  5 22:15:04.639: INFO: Pod "downwardapi-volume-21828037-af09-402a-b382-7113acac1284": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.247464868s
STEP: Saw pod success
Feb  5 22:15:04.640: INFO: Pod "downwardapi-volume-21828037-af09-402a-b382-7113acac1284" satisfied condition "success or failure"
Feb  5 22:15:04.646: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-21828037-af09-402a-b382-7113acac1284 container client-container: 
STEP: delete the pod
Feb  5 22:15:04.822: INFO: Waiting for pod downwardapi-volume-21828037-af09-402a-b382-7113acac1284 to disappear
Feb  5 22:15:04.825: INFO: Pod downwardapi-volume-21828037-af09-402a-b382-7113acac1284 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:15:04.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9363" for this suite.

• [SLOW TEST:10.537 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2572,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:15:04.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  5 22:15:04.975: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  5 22:15:04.998: INFO: Waiting for terminating namespaces to be deleted...
Feb  5 22:15:05.000: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb  5 22:15:05.006: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb  5 22:15:05.006: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  5 22:15:05.006: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  5 22:15:05.006: INFO: 	Container weave ready: true, restart count 1
Feb  5 22:15:05.006: INFO: 	Container weave-npc ready: true, restart count 0
Feb  5 22:15:05.006: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb  5 22:15:05.023: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb  5 22:15:05.023: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  5 22:15:05.023: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  5 22:15:05.023: INFO: 	Container weave ready: true, restart count 0
Feb  5 22:15:05.023: INFO: 	Container weave-npc ready: true, restart count 0
Feb  5 22:15:05.023: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  5 22:15:05.023: INFO: 	Container kube-controller-manager ready: true, restart count 3
Feb  5 22:15:05.023: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  5 22:15:05.023: INFO: 	Container kube-scheduler ready: true, restart count 5
Feb  5 22:15:05.023: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb  5 22:15:05.023: INFO: 	Container etcd ready: true, restart count 1
Feb  5 22:15:05.023: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb  5 22:15:05.023: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb  5 22:15:05.023: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  5 22:15:05.023: INFO: 	Container coredns ready: true, restart count 0
Feb  5 22:15:05.023: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb  5 22:15:05.023: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8a05dfa3-7d1e-4d71-ba2e-d0ab20d491ab 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8a05dfa3-7d1e-4d71-ba2e-d0ab20d491ab off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8a05dfa3-7d1e-4d71-ba2e-d0ab20d491ab
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:15:21.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7379" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.643 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":176,"skipped":2606,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
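The NodeSelector test above labels a node with a random key/value (`kubernetes.io/e2e-…=42`) and then relaunches the pod with a matching nodeSelector. At its core, the predicate is a label-subset match: the pod may schedule onto a node only if every key/value pair in its nodeSelector appears in the node's labels. A minimal sketch of that check (a hypothetical helper, not the scheduler's actual code):

```go
package main

import "fmt"

// matchesNodeSelector reports whether every key/value pair in the pod's
// nodeSelector is present in the node's labels -- the essence of the
// NodeSelector scheduling predicate exercised by the test above.
func matchesNodeSelector(nodeLabels, nodeSelector map[string]string) bool {
	for k, v := range nodeSelector {
		if nodeLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	labels := map[string]string{"kubernetes.io/e2e-8a05dfa3-7d1e-4d71-ba2e-d0ab20d491ab": "42"}
	selector := map[string]string{"kubernetes.io/e2e-8a05dfa3-7d1e-4d71-ba2e-d0ab20d491ab": "42"}
	fmt.Println(matchesNodeSelector(labels, selector)) // true: the relaunched pod lands on the labeled node
	fmt.Println(matchesNodeSelector(map[string]string{}, selector)) // false: label removed, pod no longer fits
}
```

This is why the test removes the label again in AfterEach: leaving it in place could attract later test pods to that node.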
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:15:21.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:15:21.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb  5 22:15:21.822: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-05T22:15:21Z generation:1 name:name1 resourceVersion:6618666 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:791c9729-706a-4154-8f7b-a9cffa9096d0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb  5 22:15:31.845: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-05T22:15:31Z generation:1 name:name2 resourceVersion:6618704 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5d5d1f13-0abd-4c36-bdc5-292198ca7e46] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb  5 22:15:41.855: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-05T22:15:21Z generation:2 name:name1 resourceVersion:6618729 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:791c9729-706a-4154-8f7b-a9cffa9096d0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb  5 22:15:51.866: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-05T22:15:31Z generation:2 name:name2 resourceVersion:6618753 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5d5d1f13-0abd-4c36-bdc5-292198ca7e46] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb  5 22:16:01.885: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-05T22:15:21Z generation:2 name:name1 resourceVersion:6618777 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:791c9729-706a-4154-8f7b-a9cffa9096d0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb  5 22:16:11.897: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-05T22:15:31Z generation:2 name:name2 resourceVersion:6618801 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:5d5d1f13-0abd-4c36-bdc5-292198ca7e46] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:16:22.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-4376" for this suite.

• [SLOW TEST:60.942 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":177,"skipped":2658,"failed":0}
S
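The CRD watch test asserts a strict event ordering: ADDED for each CR, then MODIFIED, then DELETED, exactly as the `Got : …` lines above show. A self-contained sketch of that sequence check (hypothetical types and helper; the real test consumes a `watch.Interface` from client-go):

```go
package main

import "fmt"

// watchEvent mirrors the shape of the events logged above: an event type
// (ADDED/MODIFIED/DELETED) plus the custom resource's name.
type watchEvent struct {
	Type, Name string
}

// expectSequence verifies that a stream of watch events arrived in the
// exact order the test expects.
func expectSequence(got, want []watchEvent) error {
	if len(got) != len(want) {
		return fmt.Errorf("got %d events, want %d", len(got), len(want))
	}
	for i, e := range got {
		if e != want[i] {
			return fmt.Errorf("event %d: got %+v, want %+v", i, e, want[i])
		}
	}
	return nil
}

func main() {
	// The sequence recorded in the log for name1 and name2.
	got := []watchEvent{
		{"ADDED", "name1"}, {"ADDED", "name2"},
		{"MODIFIED", "name1"}, {"MODIFIED", "name2"},
		{"DELETED", "name1"}, {"DELETED", "name2"},
	}
	fmt.Println(expectSequence(got, got) == nil) // true
}
```

Note the resourceVersion in each logged event increases monotonically (6618666 → 6618801), which is what makes this ordering guarantee possible.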
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:16:22.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb  5 22:16:22.694: INFO: Number of nodes with available pods: 0
Feb  5 22:16:22.694: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:16:23.718: INFO: Number of nodes with available pods: 0
Feb  5 22:16:23.718: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:16:24.945: INFO: Number of nodes with available pods: 0
Feb  5 22:16:24.945: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:16:25.705: INFO: Number of nodes with available pods: 0
Feb  5 22:16:25.706: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:16:26.704: INFO: Number of nodes with available pods: 0
Feb  5 22:16:26.704: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:16:27.805: INFO: Number of nodes with available pods: 0
Feb  5 22:16:27.805: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:16:28.706: INFO: Number of nodes with available pods: 0
Feb  5 22:16:28.706: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:16:29.889: INFO: Number of nodes with available pods: 0
Feb  5 22:16:29.889: INFO: Node jerma-node is running more than one daemon pod
Feb  5 22:16:30.747: INFO: Number of nodes with available pods: 1
Feb  5 22:16:30.747: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:31.778: INFO: Number of nodes with available pods: 2
Feb  5 22:16:31.778: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb  5 22:16:31.819: INFO: Number of nodes with available pods: 1
Feb  5 22:16:31.819: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:32.832: INFO: Number of nodes with available pods: 1
Feb  5 22:16:32.832: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:33.841: INFO: Number of nodes with available pods: 1
Feb  5 22:16:33.841: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:34.832: INFO: Number of nodes with available pods: 1
Feb  5 22:16:34.832: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:35.840: INFO: Number of nodes with available pods: 1
Feb  5 22:16:35.840: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:36.872: INFO: Number of nodes with available pods: 1
Feb  5 22:16:36.872: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:37.872: INFO: Number of nodes with available pods: 1
Feb  5 22:16:37.872: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:38.832: INFO: Number of nodes with available pods: 1
Feb  5 22:16:38.832: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:39.843: INFO: Number of nodes with available pods: 1
Feb  5 22:16:39.843: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:40.837: INFO: Number of nodes with available pods: 1
Feb  5 22:16:40.838: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:41.835: INFO: Number of nodes with available pods: 1
Feb  5 22:16:41.835: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:42.831: INFO: Number of nodes with available pods: 1
Feb  5 22:16:42.831: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:43.832: INFO: Number of nodes with available pods: 1
Feb  5 22:16:43.832: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:44.829: INFO: Number of nodes with available pods: 1
Feb  5 22:16:44.829: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:45.833: INFO: Number of nodes with available pods: 1
Feb  5 22:16:45.833: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:47.473: INFO: Number of nodes with available pods: 1
Feb  5 22:16:47.473: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:47.857: INFO: Number of nodes with available pods: 1
Feb  5 22:16:47.857: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:48.829: INFO: Number of nodes with available pods: 1
Feb  5 22:16:48.829: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb  5 22:16:49.831: INFO: Number of nodes with available pods: 2
Feb  5 22:16:49.831: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2257, will wait for the garbage collector to delete the pods
Feb  5 22:16:49.930: INFO: Deleting DaemonSet.extensions daemon-set took: 43.706327ms
Feb  5 22:16:51.730: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.800454776s
Feb  5 22:17:03.238: INFO: Number of nodes with available pods: 0
Feb  5 22:17:03.238: INFO: Number of running nodes: 0, number of available pods: 0
Feb  5 22:17:03.243: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2257/daemonsets","resourceVersion":"6618977"},"items":null}

Feb  5 22:17:03.248: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2257/pods","resourceVersion":"6618977"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:17:03.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2257" for this suite.

• [SLOW TEST:40.817 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":178,"skipped":2659,"failed":0}
SSSSSSS
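The DaemonSet test polls one readiness condition over and over, visible in the repeated `Number of nodes with available pods` lines: the set is ready once every schedulable node runs an available daemon pod. A minimal sketch of that condition (a hypothetical simplification of the e2e check):

```go
package main

import "fmt"

// daemonSetReady mirrors the condition polled in the log above: the
// DaemonSet is considered ready when the count of nodes with an
// available daemon pod equals the count of running (schedulable) nodes.
func daemonSetReady(nodesWithAvailablePods, runningNodes int) bool {
	return nodesWithAvailablePods == runningNodes
}

func main() {
	// States observed in the log while the deleted daemon pod is revived:
	// the count drops to 1/2, then returns to 2/2 once the controller
	// recreates the pod.
	for _, observed := range []int{2, 1, 2} {
		fmt.Printf("available on %d/2 nodes, ready=%v\n", observed, daemonSetReady(observed, 2))
	}
}
```

This also explains the "Stop a daemon pod, check that the daemon pod is revived" phase: deleting one pod drops the count to 1, and the test simply waits for the controller to restore 2/2.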
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:17:03.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb  5 22:17:04.348: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb  5 22:17:06.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:17:08.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:17:10.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:17:12.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716537824, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  5 22:17:15.390: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:17:15.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:17:16.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5028" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:13.126 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":179,"skipped":2666,"failed":0}
SSSSSSS
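The conversion webhook test creates a CR at v1 and reads it back at v2, with the deployed webhook translating between schemas. The exact fields are not shown in this log; the sketch below assumes the upstream sample's schema change (a combined `hostPort` string in v1 split into `host` and `port` in v2), so treat the field names as illustrative, not authoritative:

```go
package main

import (
	"fmt"
	"strings"
)

// convertV1ToV2 illustrates the kind of per-object translation a CRD
// conversion webhook performs. ASSUMPTION: the v1 schema has a combined
// "hostPort" field that v2 splits into "host" and "port" (as in the
// upstream sample webhook); the real test's fields may differ.
func convertV1ToV2(obj map[string]string) (map[string]string, error) {
	hp, ok := obj["hostPort"]
	if !ok {
		return nil, fmt.Errorf("v1 object missing hostPort")
	}
	i := strings.LastIndex(hp, ":")
	if i < 0 {
		return nil, fmt.Errorf("invalid hostPort %q", hp)
	}
	return map[string]string{"host": hp[:i], "port": hp[i+1:]}, nil
}

func main() {
	v2, err := convertV1ToV2(map[string]string{"hostPort": "localhost:8080"})
	fmt.Println(v2, err) // map[host:localhost port:8080] <nil>
}
```

Whatever the schema, the apiserver calls the webhook transparently on reads and writes, which is why the test only needs to create at v1 and get at v2.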
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:17:16.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-330f5bef-5c06-4fa7-8107-ece950597d6c
STEP: Creating a pod to test consume configMaps
Feb  5 22:17:16.749: INFO: Waiting up to 5m0s for pod "pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6" in namespace "configmap-5259" to be "success or failure"
Feb  5 22:17:16.782: INFO: Pod "pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6": Phase="Pending", Reason="", readiness=false. Elapsed: 33.031971ms
Feb  5 22:17:18.787: INFO: Pod "pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037934226s
Feb  5 22:17:20.795: INFO: Pod "pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045382509s
Feb  5 22:17:22.803: INFO: Pod "pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053193814s
Feb  5 22:17:24.815: INFO: Pod "pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065778442s
Feb  5 22:17:27.206: INFO: Pod "pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.457050282s
STEP: Saw pod success
Feb  5 22:17:27.207: INFO: Pod "pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6" satisfied condition "success or failure"
Feb  5 22:17:27.212: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6 container configmap-volume-test: 
STEP: delete the pod
Feb  5 22:17:27.415: INFO: Waiting for pod pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6 to disappear
Feb  5 22:17:27.422: INFO: Pod pod-configmaps-d696e95f-8a0a-4239-9a72-f13b4858e6c6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:17:27.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5259" for this suite.

• [SLOW TEST:11.037 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2673,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:17:27.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-f2f4a112-c348-48c8-84f4-b995e977fe7f
STEP: Creating a pod to test consume secrets
Feb  5 22:17:27.665: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c4bebd61-b939-4f15-8a05-ffdc30c9255f" in namespace "projected-7294" to be "success or failure"
Feb  5 22:17:27.678: INFO: Pod "pod-projected-secrets-c4bebd61-b939-4f15-8a05-ffdc30c9255f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.75394ms
Feb  5 22:17:29.688: INFO: Pod "pod-projected-secrets-c4bebd61-b939-4f15-8a05-ffdc30c9255f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022803587s
Feb  5 22:17:31.694: INFO: Pod "pod-projected-secrets-c4bebd61-b939-4f15-8a05-ffdc30c9255f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029306979s
Feb  5 22:17:33.705: INFO: Pod "pod-projected-secrets-c4bebd61-b939-4f15-8a05-ffdc30c9255f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03970498s
Feb  5 22:17:35.716: INFO: Pod "pod-projected-secrets-c4bebd61-b939-4f15-8a05-ffdc30c9255f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051155434s
STEP: Saw pod success
Feb  5 22:17:35.716: INFO: Pod "pod-projected-secrets-c4bebd61-b939-4f15-8a05-ffdc30c9255f" satisfied condition "success or failure"
Feb  5 22:17:35.723: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-c4bebd61-b939-4f15-8a05-ffdc30c9255f container projected-secret-volume-test: 
STEP: delete the pod
Feb  5 22:17:35.839: INFO: Waiting for pod pod-projected-secrets-c4bebd61-b939-4f15-8a05-ffdc30c9255f to disappear
Feb  5 22:17:35.851: INFO: Pod pod-projected-secrets-c4bebd61-b939-4f15-8a05-ffdc30c9255f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:17:35.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7294" for this suite.

• [SLOW TEST:8.428 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2688,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:17:35.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-8c9e33fa-075b-4a8d-aa49-3bb569d92d45
STEP: Creating configMap with name cm-test-opt-upd-55a2cccf-b60f-49eb-b089-f63789b3b4d9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8c9e33fa-075b-4a8d-aa49-3bb569d92d45
STEP: Updating configmap cm-test-opt-upd-55a2cccf-b60f-49eb-b089-f63789b3b4d9
STEP: Creating configMap with name cm-test-opt-create-389e788e-e066-494b-a343-12d14ff1129d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:17:48.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-876" for this suite.

• [SLOW TEST:12.754 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2719,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:17:48.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb  5 22:18:04.956: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  5 22:18:04.961: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  5 22:18:06.962: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  5 22:18:06.980: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  5 22:18:08.962: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  5 22:18:08.967: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  5 22:18:10.961: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  5 22:18:10.968: INFO: Pod pod-with-prestop-exec-hook still exists
Feb  5 22:18:12.962: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb  5 22:18:12.968: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:18:12.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4223" for this suite.

• [SLOW TEST:24.377 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2727,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:18:12.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9191
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Feb  5 22:18:13.330: INFO: Found 0 stateful pods, waiting for 3
Feb  5 22:18:23.336: INFO: Found 2 stateful pods, waiting for 3
Feb  5 22:18:33.340: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 22:18:33.340: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 22:18:33.340: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  5 22:18:43.339: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 22:18:43.340: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 22:18:43.340: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb  5 22:18:43.382: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  5 22:18:53.480: INFO: Updating stateful set ss2
Feb  5 22:18:53.522: INFO: Waiting for Pod statefulset-9191/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Feb  5 22:19:03.958: INFO: Found 2 stateful pods, waiting for 3
Feb  5 22:19:13.965: INFO: Found 2 stateful pods, waiting for 3
Feb  5 22:19:23.991: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 22:19:23.991: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  5 22:19:23.991: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  5 22:19:24.018: INFO: Updating stateful set ss2
Feb  5 22:19:24.058: INFO: Waiting for Pod statefulset-9191/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  5 22:19:34.086: INFO: Updating stateful set ss2
Feb  5 22:19:34.388: INFO: Waiting for StatefulSet statefulset-9191/ss2 to complete update
Feb  5 22:19:34.389: INFO: Waiting for Pod statefulset-9191/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  5 22:19:44.403: INFO: Waiting for StatefulSet statefulset-9191/ss2 to complete update
Feb  5 22:19:44.403: INFO: Waiting for Pod statefulset-9191/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb  5 22:19:54.398: INFO: Waiting for StatefulSet statefulset-9191/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  5 22:20:04.399: INFO: Deleting all statefulset in ns statefulset-9191
Feb  5 22:20:04.402: INFO: Scaling statefulset ss2 to 0
Feb  5 22:20:34.427: INFO: Waiting for statefulset status.replicas updated to 0
Feb  5 22:20:34.431: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:20:34.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9191" for this suite.

• [SLOW TEST:141.512 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":184,"skipped":2733,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:20:34.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  5 22:20:42.798: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:20:42.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8684" for this suite.

• [SLOW TEST:8.323 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2733,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:20:42.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  5 22:20:43.197: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8504 /api/v1/namespaces/watch-8504/configmaps/e2e-watch-test-label-changed 05affcb9-5c18-48c0-b881-8a3211eb222d 6619995 0 2020-02-05 22:20:43 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  5 22:20:43.197: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8504 /api/v1/namespaces/watch-8504/configmaps/e2e-watch-test-label-changed 05affcb9-5c18-48c0-b881-8a3211eb222d 6619996 0 2020-02-05 22:20:43 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  5 22:20:43.198: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8504 /api/v1/namespaces/watch-8504/configmaps/e2e-watch-test-label-changed 05affcb9-5c18-48c0-b881-8a3211eb222d 6619997 0 2020-02-05 22:20:43 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  5 22:20:53.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8504 /api/v1/namespaces/watch-8504/configmaps/e2e-watch-test-label-changed 05affcb9-5c18-48c0-b881-8a3211eb222d 6620028 0 2020-02-05 22:20:43 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  5 22:20:53.230: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8504 /api/v1/namespaces/watch-8504/configmaps/e2e-watch-test-label-changed 05affcb9-5c18-48c0-b881-8a3211eb222d 6620029 0 2020-02-05 22:20:43 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  5 22:20:53.230: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-8504 /api/v1/namespaces/watch-8504/configmaps/e2e-watch-test-label-changed 05affcb9-5c18-48c0-b881-8a3211eb222d 6620030 0 2020-02-05 22:20:43 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:20:53.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8504" for this suite.

• [SLOW TEST:10.407 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":186,"skipped":2738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:20:53.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:20:53.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5030'
Feb  5 22:20:53.655: INFO: stderr: ""
Feb  5 22:20:53.655: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Feb  5 22:20:53.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5030'
Feb  5 22:20:53.970: INFO: stderr: ""
Feb  5 22:20:53.970: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb  5 22:20:54.977: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:20:54.977: INFO: Found 0 / 1
Feb  5 22:20:55.977: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:20:55.977: INFO: Found 0 / 1
Feb  5 22:20:56.975: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:20:56.976: INFO: Found 0 / 1
Feb  5 22:20:57.977: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:20:57.977: INFO: Found 0 / 1
Feb  5 22:20:58.976: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:20:58.976: INFO: Found 0 / 1
Feb  5 22:20:59.977: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:20:59.977: INFO: Found 0 / 1
Feb  5 22:21:00.995: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:21:00.995: INFO: Found 0 / 1
Feb  5 22:21:02.018: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:21:02.018: INFO: Found 0 / 1
Feb  5 22:21:02.980: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:21:02.980: INFO: Found 1 / 1
Feb  5 22:21:02.980: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  5 22:21:02.985: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:21:02.985: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  5 22:21:02.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-rdxnd --namespace=kubectl-5030'
Feb  5 22:21:03.113: INFO: stderr: ""
Feb  5 22:21:03.113: INFO: stdout: "Name:         agnhost-master-rdxnd\nNamespace:    kubectl-5030\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Wed, 05 Feb 2020 22:20:53 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://9ccd43214f643b29ed69e5119b07c5c249ae7b6cf317b3933d13afa85d9c5052\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 05 Feb 2020 22:21:00 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-lkg2t (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-lkg2t:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-lkg2t\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-5030/agnhost-master-rdxnd to jerma-node\n  Normal  Pulled     6s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    3s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    3s         kubelet, jerma-node  Started container agnhost-master\n"
Feb  5 22:21:03.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5030'
Feb  5 22:21:03.243: INFO: stderr: ""
Feb  5 22:21:03.243: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5030\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: agnhost-master-rdxnd\n"
Feb  5 22:21:03.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5030'
Feb  5 22:21:03.350: INFO: stderr: ""
Feb  5 22:21:03.350: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5030\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.56.44\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb  5 22:21:03.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Feb  5 22:21:03.466: INFO: stderr: ""
Feb  5 22:21:03.467: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Wed, 05 Feb 2020 22:20:55 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Wed, 05 Feb 2020 22:20:13 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 05 Feb 2020 22:20:13 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 05 Feb 2020 22:20:13 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 05 Feb 2020 22:20:13 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         32d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         32d\n  kubectl-5030                agnhost-master-rdxnd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb  5 22:21:03.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5030'
Feb  5 22:21:03.565: INFO: stderr: ""
Feb  5 22:21:03.566: INFO: stdout: "Name:         kubectl-5030\nLabels:       e2e-framework=kubectl\n              e2e-run=f2161c11-d3e7-47a3-aafe-b2fe567d349b\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:21:03.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5030" for this suite.

• [SLOW TEST:10.334 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1134
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":187,"skipped":2761,"failed":0}
SSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:21:03.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:21:03.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3437" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":188,"skipped":2768,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:21:03.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-2272
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2272 to expose endpoints map[]
Feb  5 22:21:03.827: INFO: Get endpoints failed (8.930976ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  5 22:21:04.837: INFO: successfully validated that service multi-endpoint-test in namespace services-2272 exposes endpoints map[] (1.018174847s elapsed)
STEP: Creating pod pod1 in namespace services-2272
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2272 to expose endpoints map[pod1:[100]]
Feb  5 22:21:08.970: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.116201354s elapsed, will retry)
Feb  5 22:21:12.010: INFO: successfully validated that service multi-endpoint-test in namespace services-2272 exposes endpoints map[pod1:[100]] (7.155943732s elapsed)
STEP: Creating pod pod2 in namespace services-2272
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2272 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  5 22:21:16.642: INFO: Unexpected endpoints: found map[01841dbb-892d-4226-9e97-58c81a58d8e2:[100]], expected map[pod1:[100] pod2:[101]] (4.627155147s elapsed, will retry)
Feb  5 22:21:19.698: INFO: successfully validated that service multi-endpoint-test in namespace services-2272 exposes endpoints map[pod1:[100] pod2:[101]] (7.683901924s elapsed)
STEP: Deleting pod pod1 in namespace services-2272
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2272 to expose endpoints map[pod2:[101]]
Feb  5 22:21:19.753: INFO: successfully validated that service multi-endpoint-test in namespace services-2272 exposes endpoints map[pod2:[101]] (46.202656ms elapsed)
STEP: Deleting pod pod2 in namespace services-2272
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2272 to expose endpoints map[]
Feb  5 22:21:20.816: INFO: successfully validated that service multi-endpoint-test in namespace services-2272 exposes endpoints map[] (1.013714169s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:21:20.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2272" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:17.249 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":189,"skipped":2776,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:21:20.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-3805c1d8-d1a3-45f6-bfca-24a7b83fe39f
STEP: Creating a pod to test consuming configMaps
Feb  5 22:21:21.162: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee" in namespace "projected-1045" to be "success or failure"
Feb  5 22:21:21.171: INFO: Pod "pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee": Phase="Pending", Reason="", readiness=false. Elapsed: 9.56289ms
Feb  5 22:21:23.238: INFO: Pod "pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076251346s
Feb  5 22:21:25.246: INFO: Pod "pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084777041s
Feb  5 22:21:27.315: INFO: Pod "pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.153860081s
Feb  5 22:21:29.346: INFO: Pod "pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184550542s
Feb  5 22:21:31.355: INFO: Pod "pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.193274472s
STEP: Saw pod success
Feb  5 22:21:31.355: INFO: Pod "pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee" satisfied condition "success or failure"
Feb  5 22:21:31.359: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee container projected-configmap-volume-test: 
STEP: delete the pod
Feb  5 22:21:31.601: INFO: Waiting for pod pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee to disappear
Feb  5 22:21:31.613: INFO: Pod pod-projected-configmaps-2f130a95-97ad-45e0-8148-3c1b8a0f27ee no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:21:31.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1045" for this suite.

• [SLOW TEST:10.688 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":2782,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:21:31.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
STEP: creating a pod
Feb  5 22:21:31.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-9659 -- logs-generator --log-lines-total 100 --run-duration 20s'
Feb  5 22:21:32.028: INFO: stderr: ""
Feb  5 22:21:32.029: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Waiting for log generator to start.
Feb  5 22:21:32.029: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Feb  5 22:21:32.029: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9659" to be "running and ready, or succeeded"
Feb  5 22:21:32.116: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 86.979741ms
Feb  5 22:21:34.120: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091119657s
Feb  5 22:21:36.126: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096903407s
Feb  5 22:21:38.137: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108508503s
Feb  5 22:21:40.145: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.116389143s
Feb  5 22:21:40.146: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Feb  5 22:21:40.146: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Feb  5 22:21:40.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9659'
Feb  5 22:21:40.324: INFO: stderr: ""
Feb  5 22:21:40.324: INFO: stdout: "I0205 22:21:37.717656       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/cmvd 538\nI0205 22:21:37.918082       1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/z9zg 278\nI0205 22:21:38.118180       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/fj4 411\nI0205 22:21:38.318113       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/67q7 434\nI0205 22:21:38.518098       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/fxs 385\nI0205 22:21:38.718313       1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/hqh 351\nI0205 22:21:38.917951       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/xx2 435\nI0205 22:21:39.117920       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/v2bc 249\nI0205 22:21:39.318016       1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/qxzm 491\nI0205 22:21:39.518071       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/kql 281\nI0205 22:21:39.718091       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/vtqv 351\nI0205 22:21:39.917861       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/7hk 249\nI0205 22:21:40.117959       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/ckzq 273\n"
STEP: limiting log lines
Feb  5 22:21:40.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9659 --tail=1'
Feb  5 22:21:40.528: INFO: stderr: ""
Feb  5 22:21:40.529: INFO: stdout: "I0205 22:21:40.317937       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/srmq 340\n"
Feb  5 22:21:40.529: INFO: got output "I0205 22:21:40.317937       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/srmq 340\n"
STEP: limiting log bytes
Feb  5 22:21:40.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9659 --limit-bytes=1'
Feb  5 22:21:40.632: INFO: stderr: ""
Feb  5 22:21:40.632: INFO: stdout: "I"
Feb  5 22:21:40.632: INFO: got output "I"
STEP: exposing timestamps
Feb  5 22:21:40.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9659 --tail=1 --timestamps'
Feb  5 22:21:40.774: INFO: stderr: ""
Feb  5 22:21:40.775: INFO: stdout: "2020-02-05T22:21:40.719234992Z I0205 22:21:40.718339       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/h7r5 225\n"
Feb  5 22:21:40.775: INFO: got output "2020-02-05T22:21:40.719234992Z I0205 22:21:40.718339       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/h7r5 225\n"
STEP: restricting to a time range
Feb  5 22:21:43.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9659 --since=1s'
Feb  5 22:21:43.497: INFO: stderr: ""
Feb  5 22:21:43.497: INFO: stdout: "I0205 22:21:42.518146       1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/968g 203\nI0205 22:21:42.717863       1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/vlw 518\nI0205 22:21:42.917928       1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/g2k4 293\nI0205 22:21:43.117948       1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/nps 423\nI0205 22:21:43.317942       1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/bwf 289\n"
Feb  5 22:21:43.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9659 --since=24h'
Feb  5 22:21:43.744: INFO: stderr: ""
Feb  5 22:21:43.744: INFO: stdout: "I0205 22:21:37.717656       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/cmvd 538\nI0205 22:21:37.918082       1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/z9zg 278\nI0205 22:21:38.118180       1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/fj4 411\nI0205 22:21:38.318113       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/67q7 434\nI0205 22:21:38.518098       1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/fxs 385\nI0205 22:21:38.718313       1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/hqh 351\nI0205 22:21:38.917951       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/xx2 435\nI0205 22:21:39.117920       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/v2bc 249\nI0205 22:21:39.318016       1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/qxzm 491\nI0205 22:21:39.518071       1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/kql 281\nI0205 22:21:39.718091       1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/vtqv 351\nI0205 22:21:39.917861       1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/7hk 249\nI0205 22:21:40.117959       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/ckzq 273\nI0205 22:21:40.317937       1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/srmq 340\nI0205 22:21:40.517993       1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/hx7 448\nI0205 22:21:40.718339       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/h7r5 225\nI0205 22:21:40.917944       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/kube-system/pods/6294 398\nI0205 22:21:41.118065       1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/9gqv 331\nI0205 22:21:41.317977       1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/ljf 313\nI0205 22:21:41.518118       1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/cfs6 581\nI0205 22:21:41.718202       1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/8m6q 436\nI0205 22:21:41.917977       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/rj2 327\nI0205 22:21:42.118775       1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/stp 414\nI0205 22:21:42.317889       1 logs_generator.go:76] 23 POST /api/v1/namespaces/default/pods/xpbq 365\nI0205 22:21:42.518146       1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/968g 203\nI0205 22:21:42.717863       1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/vlw 518\nI0205 22:21:42.917928       1 logs_generator.go:76] 26 POST /api/v1/namespaces/default/pods/g2k4 293\nI0205 22:21:43.117948       1 logs_generator.go:76] 27 GET /api/v1/namespaces/kube-system/pods/nps 423\nI0205 22:21:43.317942       1 logs_generator.go:76] 28 GET /api/v1/namespaces/kube-system/pods/bwf 289\nI0205 22:21:43.517820       1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/qxjv 365\nI0205 22:21:43.718050       1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/vg44 204\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
Feb  5 22:21:43.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9659'
Feb  5 22:21:48.745: INFO: stderr: ""
Feb  5 22:21:48.745: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:21:48.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9659" for this suite.

• [SLOW TEST:17.137 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1440
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":278,"completed":191,"skipped":2831,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:21:48.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-7a41305b-128a-4698-9df8-7ccf36abbc82 in namespace container-probe-1048
Feb  5 22:21:56.936: INFO: Started pod busybox-7a41305b-128a-4698-9df8-7ccf36abbc82 in namespace container-probe-1048
STEP: checking the pod's current state and verifying that restartCount is present
Feb  5 22:21:56.940: INFO: Initial restart count of pod busybox-7a41305b-128a-4698-9df8-7ccf36abbc82 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:25:58.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1048" for this suite.

• [SLOW TEST:249.777 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":2840,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:25:58.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  5 22:25:58.652: INFO: Waiting up to 5m0s for pod "downward-api-70c7a763-70ba-427d-ba97-a4ad3782f260" in namespace "downward-api-7722" to be "success or failure"
Feb  5 22:25:58.660: INFO: Pod "downward-api-70c7a763-70ba-427d-ba97-a4ad3782f260": Phase="Pending", Reason="", readiness=false. Elapsed: 7.736758ms
Feb  5 22:26:00.669: INFO: Pod "downward-api-70c7a763-70ba-427d-ba97-a4ad3782f260": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016811128s
Feb  5 22:26:02.675: INFO: Pod "downward-api-70c7a763-70ba-427d-ba97-a4ad3782f260": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022447139s
Feb  5 22:26:04.692: INFO: Pod "downward-api-70c7a763-70ba-427d-ba97-a4ad3782f260": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039449472s
Feb  5 22:26:06.697: INFO: Pod "downward-api-70c7a763-70ba-427d-ba97-a4ad3782f260": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045036762s
STEP: Saw pod success
Feb  5 22:26:06.698: INFO: Pod "downward-api-70c7a763-70ba-427d-ba97-a4ad3782f260" satisfied condition "success or failure"
Feb  5 22:26:06.700: INFO: Trying to get logs from node jerma-node pod downward-api-70c7a763-70ba-427d-ba97-a4ad3782f260 container dapi-container: 
STEP: delete the pod
Feb  5 22:26:06.800: INFO: Waiting for pod downward-api-70c7a763-70ba-427d-ba97-a4ad3782f260 to disappear
Feb  5 22:26:06.805: INFO: Pod downward-api-70c7a763-70ba-427d-ba97-a4ad3782f260 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:26:06.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7722" for this suite.

• [SLOW TEST:8.305 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":2859,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:26:06.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-ff552bee-adde-4ee3-9f69-e54d97a0e7f6
STEP: Creating a pod to test consuming configMaps
Feb  5 22:26:07.093: INFO: Waiting up to 5m0s for pod "pod-configmaps-a31e89ec-ff5f-4045-bf52-8c8729f79a4a" in namespace "configmap-5407" to be "success or failure"
Feb  5 22:26:07.147: INFO: Pod "pod-configmaps-a31e89ec-ff5f-4045-bf52-8c8729f79a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 54.080334ms
Feb  5 22:26:09.154: INFO: Pod "pod-configmaps-a31e89ec-ff5f-4045-bf52-8c8729f79a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060721882s
Feb  5 22:26:11.182: INFO: Pod "pod-configmaps-a31e89ec-ff5f-4045-bf52-8c8729f79a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088727182s
Feb  5 22:26:13.208: INFO: Pod "pod-configmaps-a31e89ec-ff5f-4045-bf52-8c8729f79a4a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114710352s
Feb  5 22:26:15.277: INFO: Pod "pod-configmaps-a31e89ec-ff5f-4045-bf52-8c8729f79a4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.184348339s
STEP: Saw pod success
Feb  5 22:26:15.277: INFO: Pod "pod-configmaps-a31e89ec-ff5f-4045-bf52-8c8729f79a4a" satisfied condition "success or failure"
Feb  5 22:26:15.300: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a31e89ec-ff5f-4045-bf52-8c8729f79a4a container configmap-volume-test: 
STEP: delete the pod
Feb  5 22:26:15.626: INFO: Waiting for pod pod-configmaps-a31e89ec-ff5f-4045-bf52-8c8729f79a4a to disappear
Feb  5 22:26:15.633: INFO: Pod pod-configmaps-a31e89ec-ff5f-4045-bf52-8c8729f79a4a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:26:15.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5407" for this suite.

• [SLOW TEST:8.810 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":2883,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:26:15.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  5 22:26:22.260: INFO: 0 pods remaining
Feb  5 22:26:22.260: INFO: 0 pods has nil DeletionTimestamp
Feb  5 22:26:22.260: INFO: 
STEP: Gathering metrics
W0205 22:26:23.454085       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  5 22:26:23.454: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:26:23.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-205" for this suite.

• [SLOW TEST:7.999 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":195,"skipped":2887,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:26:23.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Feb  5 22:26:40.812: INFO: Successfully updated pod "annotationupdate1f73714e-a94b-468b-8086-289c0798ac30"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:26:42.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8164" for this suite.

• [SLOW TEST:19.255 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":2922,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:26:42.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  5 22:26:43.068: INFO: Waiting up to 5m0s for pod "pod-3799e738-48d5-4630-8bf7-4e2725103032" in namespace "emptydir-7936" to be "success or failure"
Feb  5 22:26:43.105: INFO: Pod "pod-3799e738-48d5-4630-8bf7-4e2725103032": Phase="Pending", Reason="", readiness=false. Elapsed: 37.381662ms
Feb  5 22:26:45.116: INFO: Pod "pod-3799e738-48d5-4630-8bf7-4e2725103032": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048192605s
Feb  5 22:26:47.121: INFO: Pod "pod-3799e738-48d5-4630-8bf7-4e2725103032": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053471881s
Feb  5 22:26:49.128: INFO: Pod "pod-3799e738-48d5-4630-8bf7-4e2725103032": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059781656s
Feb  5 22:26:51.136: INFO: Pod "pod-3799e738-48d5-4630-8bf7-4e2725103032": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068122177s
Feb  5 22:26:53.141: INFO: Pod "pod-3799e738-48d5-4630-8bf7-4e2725103032": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073507837s
STEP: Saw pod success
Feb  5 22:26:53.141: INFO: Pod "pod-3799e738-48d5-4630-8bf7-4e2725103032" satisfied condition "success or failure"
Feb  5 22:26:53.144: INFO: Trying to get logs from node jerma-node pod pod-3799e738-48d5-4630-8bf7-4e2725103032 container test-container: 
STEP: delete the pod
Feb  5 22:26:54.135: INFO: Waiting for pod pod-3799e738-48d5-4630-8bf7-4e2725103032 to disappear
Feb  5 22:26:54.142: INFO: Pod pod-3799e738-48d5-4630-8bf7-4e2725103032 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:26:54.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7936" for this suite.

• [SLOW TEST:11.251 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":2960,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:26:54.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Feb  5 22:27:02.452: INFO: Pod pod-hostip-2a0b07f2-6dac-49b9-82ee-847b07670cd5 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:27:02.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1035" for this suite.

• [SLOW TEST:8.297 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":2961,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:27:02.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  5 22:27:02.556: INFO: Waiting up to 5m0s for pod "pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb" in namespace "emptydir-5154" to be "success or failure"
Feb  5 22:27:02.566: INFO: Pod "pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.429269ms
Feb  5 22:27:04.583: INFO: Pod "pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026630065s
Feb  5 22:27:06.591: INFO: Pod "pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035431672s
Feb  5 22:27:08.603: INFO: Pod "pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046511984s
Feb  5 22:27:10.609: INFO: Pod "pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052840113s
Feb  5 22:27:12.620: INFO: Pod "pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063765386s
STEP: Saw pod success
Feb  5 22:27:12.620: INFO: Pod "pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb" satisfied condition "success or failure"
Feb  5 22:27:12.623: INFO: Trying to get logs from node jerma-node pod pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb container test-container: 
STEP: delete the pod
Feb  5 22:27:12.747: INFO: Waiting for pod pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb to disappear
Feb  5 22:27:12.756: INFO: Pod pod-7f641ee9-f10e-4a9e-aa63-c8494d92defb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:27:12.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5154" for this suite.

• [SLOW TEST:10.300 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3033,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:27:12.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  5 22:27:13.363: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Feb  5 22:27:15.383: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:27:17.392: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:27:19.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:27:21.389: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538433, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  5 22:27:24.469: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:27:24.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8092" for this suite.
STEP: Destroying namespace "webhook-8092-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.100 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":200,"skipped":3034,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:27:24.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0205 22:27:55.553247       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  5 22:27:55.553: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:27:55.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3820" for this suite.

• [SLOW TEST:30.699 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":201,"skipped":3089,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:27:55.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Feb  5 22:27:55.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3864'
Feb  5 22:27:57.890: INFO: stderr: ""
Feb  5 22:27:57.890: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  5 22:27:57.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3864'
Feb  5 22:27:58.224: INFO: stderr: ""
Feb  5 22:27:58.224: INFO: stdout: "update-demo-nautilus-vw8lg update-demo-nautilus-zppj2 "
Feb  5 22:27:58.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vw8lg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3864'
Feb  5 22:27:58.377: INFO: stderr: ""
Feb  5 22:27:58.377: INFO: stdout: ""
Feb  5 22:27:58.377: INFO: update-demo-nautilus-vw8lg is created but not running
Feb  5 22:28:03.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3864'
Feb  5 22:28:03.550: INFO: stderr: ""
Feb  5 22:28:03.550: INFO: stdout: "update-demo-nautilus-vw8lg update-demo-nautilus-zppj2 "
Feb  5 22:28:03.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vw8lg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3864'
Feb  5 22:28:03.719: INFO: stderr: ""
Feb  5 22:28:03.719: INFO: stdout: ""
Feb  5 22:28:03.719: INFO: update-demo-nautilus-vw8lg is created but not running
Feb  5 22:28:08.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3864'
Feb  5 22:28:08.885: INFO: stderr: ""
Feb  5 22:28:08.885: INFO: stdout: "update-demo-nautilus-vw8lg update-demo-nautilus-zppj2 "
Feb  5 22:28:08.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vw8lg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3864'
Feb  5 22:28:08.979: INFO: stderr: ""
Feb  5 22:28:08.979: INFO: stdout: "true"
Feb  5 22:28:08.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vw8lg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3864'
Feb  5 22:28:09.130: INFO: stderr: ""
Feb  5 22:28:09.130: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  5 22:28:09.130: INFO: validating pod update-demo-nautilus-vw8lg
Feb  5 22:28:09.145: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  5 22:28:09.146: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  5 22:28:09.146: INFO: update-demo-nautilus-vw8lg is verified up and running
Feb  5 22:28:09.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zppj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3864'
Feb  5 22:28:09.285: INFO: stderr: ""
Feb  5 22:28:09.285: INFO: stdout: "true"
Feb  5 22:28:09.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zppj2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3864'
Feb  5 22:28:09.431: INFO: stderr: ""
Feb  5 22:28:09.431: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  5 22:28:09.431: INFO: validating pod update-demo-nautilus-zppj2
Feb  5 22:28:09.439: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  5 22:28:09.439: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  5 22:28:09.439: INFO: update-demo-nautilus-zppj2 is verified up and running
STEP: rolling-update to new replication controller
Feb  5 22:28:09.442: INFO: scanned /root for discovery docs: 
Feb  5 22:28:09.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3864'
Feb  5 22:28:38.284: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  5 22:28:38.284: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  5 22:28:38.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3864'
Feb  5 22:28:38.459: INFO: stderr: ""
Feb  5 22:28:38.460: INFO: stdout: "update-demo-kitten-q8dct update-demo-kitten-tzgg4 "
Feb  5 22:28:38.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q8dct -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3864'
Feb  5 22:28:38.557: INFO: stderr: ""
Feb  5 22:28:38.558: INFO: stdout: "true"
Feb  5 22:28:38.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q8dct -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3864'
Feb  5 22:28:38.636: INFO: stderr: ""
Feb  5 22:28:38.636: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  5 22:28:38.636: INFO: validating pod update-demo-kitten-q8dct
Feb  5 22:28:38.646: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  5 22:28:38.646: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  5 22:28:38.646: INFO: update-demo-kitten-q8dct is verified up and running
Feb  5 22:28:38.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzgg4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3864'
Feb  5 22:28:38.741: INFO: stderr: ""
Feb  5 22:28:38.741: INFO: stdout: "true"
Feb  5 22:28:38.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzgg4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3864'
Feb  5 22:28:38.840: INFO: stderr: ""
Feb  5 22:28:38.841: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  5 22:28:38.841: INFO: validating pod update-demo-kitten-tzgg4
Feb  5 22:28:38.920: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  5 22:28:38.920: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  5 22:28:38.920: INFO: update-demo-kitten-tzgg4 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:28:38.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3864" for this suite.

• [SLOW TEST:43.364 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":202,"skipped":3112,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:28:38.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:28:51.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4854" for this suite.

• [SLOW TEST:12.301 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3155,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:28:51.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-6cd09baf-4643-4eaf-9c79-d4319bf8f489
STEP: Creating a pod to test consume configMaps
Feb  5 22:28:51.386: INFO: Waiting up to 5m0s for pod "pod-configmaps-4daf39d7-491b-4d8b-91aa-b11987c1758e" in namespace "configmap-1706" to be "success or failure"
Feb  5 22:28:51.449: INFO: Pod "pod-configmaps-4daf39d7-491b-4d8b-91aa-b11987c1758e": Phase="Pending", Reason="", readiness=false. Elapsed: 61.970609ms
Feb  5 22:28:53.458: INFO: Pod "pod-configmaps-4daf39d7-491b-4d8b-91aa-b11987c1758e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071634879s
Feb  5 22:28:55.478: INFO: Pod "pod-configmaps-4daf39d7-491b-4d8b-91aa-b11987c1758e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090985543s
Feb  5 22:28:57.485: INFO: Pod "pod-configmaps-4daf39d7-491b-4d8b-91aa-b11987c1758e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098729695s
Feb  5 22:28:59.505: INFO: Pod "pod-configmaps-4daf39d7-491b-4d8b-91aa-b11987c1758e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11813298s
STEP: Saw pod success
Feb  5 22:28:59.505: INFO: Pod "pod-configmaps-4daf39d7-491b-4d8b-91aa-b11987c1758e" satisfied condition "success or failure"
Feb  5 22:28:59.511: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4daf39d7-491b-4d8b-91aa-b11987c1758e container configmap-volume-test: 
STEP: delete the pod
Feb  5 22:28:59.567: INFO: Waiting for pod pod-configmaps-4daf39d7-491b-4d8b-91aa-b11987c1758e to disappear
Feb  5 22:28:59.573: INFO: Pod pod-configmaps-4daf39d7-491b-4d8b-91aa-b11987c1758e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:28:59.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1706" for this suite.

• [SLOW TEST:8.383 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3201,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:28:59.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:28:59.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  5 22:28:59.867: INFO: stderr: ""
Feb  5 22:28:59.867: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:10:40Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:28:59.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-274" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":205,"skipped":3215,"failed":0}
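The `kubectl version` stdout captured above is a Go struct dump (`version.Info{...}`), not JSON, so pulling fields out of it takes string matching. A sketch of extracting both `GitVersion` values with a regex (this is an illustration, not how the e2e test itself validates the output; the abbreviated `stdout` below mirrors the log):

```python
import re

VERSION_RE = re.compile(r'GitVersion:"(v[^"]+)"')

def git_versions(stdout: str) -> list:
    """Return every GitVersion field in kubectl's Go-struct version dump."""
    return VERSION_RE.findall(stdout)

stdout = ('Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", '
          'Compiler:"gc", Platform:"linux/amd64"}\n'
          'Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", '
          'Compiler:"gc", Platform:"linux/amd64"}\n')
client, server = git_versions(stdout)
print(client, server)
```

In practice `kubectl version -o json` avoids the need for scraping entirely.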
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:28:59.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  5 22:29:14.082: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 22:29:14.089: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 22:29:16.090: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 22:29:16.097: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 22:29:18.090: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 22:29:18.095: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 22:29:20.090: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 22:29:20.098: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 22:29:22.090: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 22:29:22.098: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  5 22:29:24.090: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  5 22:29:24.097: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:29:24.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-776" for this suite.

• [SLOW TEST:24.224 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3227,"failed":0}
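The repeated "Waiting for pod ... to disappear / still exists" pairs above are a fixed-interval poll (roughly every 2 seconds) until the pod is gone. A simplified, dependency-injected model of that loop (the signature and the injectable `get_pod` callable are assumptions for the sketch, not the framework's real API):

```python
import time

def wait_for_disappear(get_pod, interval=2.0, timeout=60.0):
    """Poll get_pod() until it returns None (pod gone) or the timeout expires.

    Models the 2-second deletion poll seen in the log; returns True when the
    pod disappeared within the deadline, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_pod() is None:
            return True
        time.sleep(interval)
    return False

# Simulated apiserver view: the pod survives three polls, then is gone.
states = iter(["Running", "Running", "Running", None])
print(wait_for_disappear(lambda: next(states), interval=0, timeout=5.0))
```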
S
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:29:24.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1841
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  5 22:29:24.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2360'
Feb  5 22:29:24.381: INFO: stderr: ""
Feb  5 22:29:24.381: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1846
Feb  5 22:29:24.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2360'
Feb  5 22:29:31.841: INFO: stderr: ""
Feb  5 22:29:31.841: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:29:31.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2360" for this suite.

• [SLOW TEST:7.787 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":207,"skipped":3228,"failed":0}
SSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:29:31.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:29:32.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5273" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":208,"skipped":3231,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:29:32.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Feb  5 22:29:32.200: INFO: Waiting up to 5m0s for pod "client-containers-d784c553-42ed-4e5e-b313-3e3eab119ebd" in namespace "containers-9498" to be "success or failure"
Feb  5 22:29:32.203: INFO: Pod "client-containers-d784c553-42ed-4e5e-b313-3e3eab119ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.103914ms
Feb  5 22:29:34.211: INFO: Pod "client-containers-d784c553-42ed-4e5e-b313-3e3eab119ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010669847s
Feb  5 22:29:36.218: INFO: Pod "client-containers-d784c553-42ed-4e5e-b313-3e3eab119ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017612156s
Feb  5 22:29:38.225: INFO: Pod "client-containers-d784c553-42ed-4e5e-b313-3e3eab119ebd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024856894s
Feb  5 22:29:40.240: INFO: Pod "client-containers-d784c553-42ed-4e5e-b313-3e3eab119ebd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039811676s
STEP: Saw pod success
Feb  5 22:29:40.240: INFO: Pod "client-containers-d784c553-42ed-4e5e-b313-3e3eab119ebd" satisfied condition "success or failure"
Feb  5 22:29:40.243: INFO: Trying to get logs from node jerma-node pod client-containers-d784c553-42ed-4e5e-b313-3e3eab119ebd container test-container: 
STEP: delete the pod
Feb  5 22:29:40.300: INFO: Waiting for pod client-containers-d784c553-42ed-4e5e-b313-3e3eab119ebd to disappear
Feb  5 22:29:40.312: INFO: Pod client-containers-d784c553-42ed-4e5e-b313-3e3eab119ebd no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:29:40.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9498" for this suite.

• [SLOW TEST:8.251 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3304,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:29:40.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0205 22:29:53.500591       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  5 22:29:53.500: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:29:53.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8817" for this suite.

• [SLOW TEST:13.366 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":210,"skipped":3414,"failed":0}
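The invariant this garbage-collector spec checks is that a dependent survives as long as at least one of its `ownerReferences` points at a live owner that is not itself being deleted; half of `simpletest-rc-to-be-deleted`'s pods were given `simpletest-rc-to-stay` as a second owner for exactly that reason. A simplified model of the rule (the uids below are made up for the sketch; this is not the controller's real code):

```python
def blocked_by_valid_owner(pod, live_uids, deleting_uids):
    """Keep a dependent while any ownerReference targets a live owner that is
    not itself being deleted -- a toy model of the GC deletion rule."""
    return any(ref["uid"] in live_uids and ref["uid"] not in deleting_uids
               for ref in pod["metadata"].get("ownerReferences", []))

# A pod owned by both rcs, as in the "set half of pods ..." step above:
pod = {"metadata": {"ownerReferences": [
    {"name": "simpletest-rc-to-be-deleted", "uid": "uid-del"},
    {"name": "simpletest-rc-to-stay", "uid": "uid-stay"},
]}}
print(blocked_by_valid_owner(pod, live_uids={"uid-del", "uid-stay"},
                             deleting_uids={"uid-del"}))
```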
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:29:53.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:30:16.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3285" for this suite.

• [SLOW TEST:22.771 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":211,"skipped":3416,"failed":0}
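The quota lifecycle exercised above (status calculated, usage captured on ConfigMap creation, usage released on deletion) can be modeled with a few lines of bookkeeping. A toy sketch, not the quota controller's implementation:

```python
class QuotaStatus:
    """Toy model of ResourceQuota hard/used accounting for one namespace."""

    def __init__(self, hard):
        self.hard = dict(hard)
        self.used = {k: 0 for k in hard}

    def create(self, resource):
        # Admission is refused when the new usage would exceed the hard limit.
        if self.used[resource] + 1 > self.hard[resource]:
            raise RuntimeError(f"exceeded quota for {resource}")
        self.used[resource] += 1

    def delete(self, resource):
        self.used[resource] -= 1

q = QuotaStatus({"configmaps": 1})
q.create("configmaps")   # quota status captures the ConfigMap creation
q.delete("configmaps")   # deletion releases the usage again
print(q.used["configmaps"])
```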
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:30:16.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  5 22:30:16.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6714'
Feb  5 22:30:16.808: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  5 22:30:16.809: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Feb  5 22:30:16.945: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-cn5jw]
Feb  5 22:30:16.946: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-cn5jw" in namespace "kubectl-6714" to be "running and ready"
Feb  5 22:30:16.955: INFO: Pod "e2e-test-httpd-rc-cn5jw": Phase="Pending", Reason="", readiness=false. Elapsed: 9.166412ms
Feb  5 22:30:18.963: INFO: Pod "e2e-test-httpd-rc-cn5jw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016836953s
Feb  5 22:30:20.972: INFO: Pod "e2e-test-httpd-rc-cn5jw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025357158s
Feb  5 22:30:22.977: INFO: Pod "e2e-test-httpd-rc-cn5jw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03101207s
Feb  5 22:30:24.983: INFO: Pod "e2e-test-httpd-rc-cn5jw": Phase="Running", Reason="", readiness=true. Elapsed: 8.037194555s
Feb  5 22:30:24.983: INFO: Pod "e2e-test-httpd-rc-cn5jw" satisfied condition "running and ready"
Feb  5 22:30:24.983: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-cn5jw]
Feb  5 22:30:24.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6714'
Feb  5 22:30:25.141: INFO: stderr: ""
Feb  5 22:30:25.141: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Wed Feb 05 22:30:22.688235 2020] [mpm_event:notice] [pid 1:tid 140426036562792] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Feb 05 22:30:22.688305 2020] [core:notice] [pid 1:tid 140426036562792] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb  5 22:30:25.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6714'
Feb  5 22:30:25.275: INFO: stderr: ""
Feb  5 22:30:25.275: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:30:25.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6714" for this suite.

• [SLOW TEST:8.815 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1608
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":212,"skipped":3439,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:30:25.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Feb  5 22:30:25.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3321'
Feb  5 22:30:25.652: INFO: stderr: ""
Feb  5 22:30:25.652: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  5 22:30:25.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3321'
Feb  5 22:30:25.872: INFO: stderr: ""
Feb  5 22:30:25.872: INFO: stdout: "update-demo-nautilus-g7p2d update-demo-nautilus-wgbkc "
Feb  5 22:30:25.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7p2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:26.006: INFO: stderr: ""
Feb  5 22:30:26.007: INFO: stdout: ""
Feb  5 22:30:26.007: INFO: update-demo-nautilus-g7p2d is created but not running
Feb  5 22:30:31.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3321'
Feb  5 22:30:32.235: INFO: stderr: ""
Feb  5 22:30:32.235: INFO: stdout: "update-demo-nautilus-g7p2d update-demo-nautilus-wgbkc "
Feb  5 22:30:32.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7p2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:32.561: INFO: stderr: ""
Feb  5 22:30:32.562: INFO: stdout: ""
Feb  5 22:30:32.562: INFO: update-demo-nautilus-g7p2d is created but not running
Feb  5 22:30:37.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3321'
Feb  5 22:30:37.742: INFO: stderr: ""
Feb  5 22:30:37.742: INFO: stdout: "update-demo-nautilus-g7p2d update-demo-nautilus-wgbkc "
Feb  5 22:30:37.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7p2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:37.867: INFO: stderr: ""
Feb  5 22:30:37.868: INFO: stdout: "true"
Feb  5 22:30:37.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7p2d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:37.976: INFO: stderr: ""
Feb  5 22:30:37.976: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  5 22:30:37.976: INFO: validating pod update-demo-nautilus-g7p2d
Feb  5 22:30:37.982: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  5 22:30:37.982: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  5 22:30:37.982: INFO: update-demo-nautilus-g7p2d is verified up and running
Feb  5 22:30:37.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgbkc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:38.076: INFO: stderr: ""
Feb  5 22:30:38.076: INFO: stdout: "true"
Feb  5 22:30:38.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wgbkc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:38.171: INFO: stderr: ""
Feb  5 22:30:38.171: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  5 22:30:38.171: INFO: validating pod update-demo-nautilus-wgbkc
Feb  5 22:30:38.180: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  5 22:30:38.180: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  5 22:30:38.180: INFO: update-demo-nautilus-wgbkc is verified up and running
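The repeated `--template={{range.items}}{{.metadata.name}} {{end}}` invocations above render each pod name followed by a space, which is why the captured stdout ends with a trailing blank. The same extraction over a pod-list object looks like this (the sample structure is a minimal stand-in for the real PodList, keeping only the fields the template touches):

```python
def pod_names(pod_list: dict) -> str:
    """Python equivalent of the Go template {{range.items}}{{.metadata.name}} {{end}}."""
    return "".join(item["metadata"]["name"] + " " for item in pod_list["items"])

sample = {"items": [{"metadata": {"name": "update-demo-nautilus-g7p2d"}},
                    {"metadata": {"name": "update-demo-nautilus-wgbkc"}}]}
print(repr(pod_names(sample)))
```

The output matches the log's stdout byte for byte, trailing space included.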
STEP: scaling down the replication controller
Feb  5 22:30:38.182: INFO: scanned /root for discovery docs: 
Feb  5 22:30:38.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3321'
Feb  5 22:30:39.301: INFO: stderr: ""
Feb  5 22:30:39.301: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  5 22:30:39.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3321'
Feb  5 22:30:39.421: INFO: stderr: ""
Feb  5 22:30:39.422: INFO: stdout: "update-demo-nautilus-g7p2d update-demo-nautilus-wgbkc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  5 22:30:44.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3321'
Feb  5 22:30:44.550: INFO: stderr: ""
Feb  5 22:30:44.550: INFO: stdout: "update-demo-nautilus-g7p2d update-demo-nautilus-wgbkc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  5 22:30:49.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3321'
Feb  5 22:30:49.717: INFO: stderr: ""
Feb  5 22:30:49.717: INFO: stdout: "update-demo-nautilus-g7p2d update-demo-nautilus-wgbkc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  5 22:30:54.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3321'
Feb  5 22:30:54.858: INFO: stderr: ""
Feb  5 22:30:54.858: INFO: stdout: "update-demo-nautilus-g7p2d "
Feb  5 22:30:54.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7p2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:54.980: INFO: stderr: ""
Feb  5 22:30:54.980: INFO: stdout: "true"
Feb  5 22:30:54.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7p2d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:55.093: INFO: stderr: ""
Feb  5 22:30:55.093: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  5 22:30:55.093: INFO: validating pod update-demo-nautilus-g7p2d
Feb  5 22:30:55.097: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  5 22:30:55.097: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  5 22:30:55.097: INFO: update-demo-nautilus-g7p2d is verified up and running
STEP: scaling up the replication controller
Feb  5 22:30:55.099: INFO: scanned /root for discovery docs: 
Feb  5 22:30:55.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3321'
Feb  5 22:30:56.266: INFO: stderr: ""
Feb  5 22:30:56.267: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  5 22:30:56.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3321'
Feb  5 22:30:56.450: INFO: stderr: ""
Feb  5 22:30:56.450: INFO: stdout: "update-demo-nautilus-g7p2d update-demo-nautilus-nnx28 "
Feb  5 22:30:56.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7p2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:56.581: INFO: stderr: ""
Feb  5 22:30:56.581: INFO: stdout: "true"
Feb  5 22:30:56.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7p2d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:56.705: INFO: stderr: ""
Feb  5 22:30:56.705: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  5 22:30:56.705: INFO: validating pod update-demo-nautilus-g7p2d
Feb  5 22:30:56.711: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  5 22:30:56.712: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  5 22:30:56.712: INFO: update-demo-nautilus-g7p2d is verified up and running
Feb  5 22:30:56.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nnx28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:30:56.797: INFO: stderr: ""
Feb  5 22:30:56.797: INFO: stdout: ""
Feb  5 22:30:56.797: INFO: update-demo-nautilus-nnx28 is created but not running
Feb  5 22:31:01.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3321'
Feb  5 22:31:01.929: INFO: stderr: ""
Feb  5 22:31:01.929: INFO: stdout: "update-demo-nautilus-g7p2d update-demo-nautilus-nnx28 "
Feb  5 22:31:01.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7p2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:31:02.020: INFO: stderr: ""
Feb  5 22:31:02.020: INFO: stdout: "true"
Feb  5 22:31:02.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7p2d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:31:02.106: INFO: stderr: ""
Feb  5 22:31:02.106: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  5 22:31:02.107: INFO: validating pod update-demo-nautilus-g7p2d
Feb  5 22:31:02.113: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  5 22:31:02.113: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  5 22:31:02.113: INFO: update-demo-nautilus-g7p2d is verified up and running
Feb  5 22:31:02.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nnx28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:31:02.209: INFO: stderr: ""
Feb  5 22:31:02.209: INFO: stdout: "true"
Feb  5 22:31:02.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nnx28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3321'
Feb  5 22:31:02.307: INFO: stderr: ""
Feb  5 22:31:02.307: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  5 22:31:02.307: INFO: validating pod update-demo-nautilus-nnx28
Feb  5 22:31:02.315: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  5 22:31:02.315: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb  5 22:31:02.315: INFO: update-demo-nautilus-nnx28 is verified up and running
STEP: using delete to clean up resources
Feb  5 22:31:02.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3321'
Feb  5 22:31:02.408: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  5 22:31:02.408: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  5 22:31:02.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3321'
Feb  5 22:31:02.519: INFO: stderr: "No resources found in kubectl-3321 namespace.\n"
Feb  5 22:31:02.520: INFO: stdout: ""
Feb  5 22:31:02.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3321 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  5 22:31:02.617: INFO: stderr: ""
Feb  5 22:31:02.617: INFO: stdout: "update-demo-nautilus-g7p2d\nupdate-demo-nautilus-nnx28\n"
Feb  5 22:31:03.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3321'
Feb  5 22:31:03.243: INFO: stderr: "No resources found in kubectl-3321 namespace.\n"
Feb  5 22:31:03.243: INFO: stdout: ""
Feb  5 22:31:03.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3321 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  5 22:31:03.335: INFO: stderr: ""
Feb  5 22:31:03.335: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:31:03.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3321" for this suite.

• [SLOW TEST:38.060 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":213,"skipped":3440,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:31:03.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  5 22:31:04.893: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5" in namespace "projected-9838" to be "success or failure"
Feb  5 22:31:04.931: INFO: Pod "downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 38.120022ms
Feb  5 22:31:06.976: INFO: Pod "downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083575539s
Feb  5 22:31:08.984: INFO: Pod "downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091283531s
Feb  5 22:31:10.991: INFO: Pod "downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098337133s
Feb  5 22:31:13.011: INFO: Pod "downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117634485s
Feb  5 22:31:15.016: INFO: Pod "downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122804896s
STEP: Saw pod success
Feb  5 22:31:15.016: INFO: Pod "downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5" satisfied condition "success or failure"
Feb  5 22:31:15.019: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5 container client-container: 
STEP: delete the pod
Feb  5 22:31:15.144: INFO: Waiting for pod downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5 to disappear
Feb  5 22:31:15.152: INFO: Pod downwardapi-volume-cf5129f0-4173-432c-ac20-935139770ce5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:31:15.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9838" for this suite.

• [SLOW TEST:11.821 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3446,"failed":0}
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:31:15.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-8dd070a5-9a37-49c7-a862-4626c22f9b8a in namespace container-probe-4261
Feb  5 22:31:23.397: INFO: Started pod liveness-8dd070a5-9a37-49c7-a862-4626c22f9b8a in namespace container-probe-4261
STEP: checking the pod's current state and verifying that restartCount is present
Feb  5 22:31:23.401: INFO: Initial restart count of pod liveness-8dd070a5-9a37-49c7-a862-4626c22f9b8a is 0
Feb  5 22:31:39.468: INFO: Restart count of pod container-probe-4261/liveness-8dd070a5-9a37-49c7-a862-4626c22f9b8a is now 1 (16.066717016s elapsed)
Feb  5 22:31:57.534: INFO: Restart count of pod container-probe-4261/liveness-8dd070a5-9a37-49c7-a862-4626c22f9b8a is now 2 (34.132705483s elapsed)
Feb  5 22:32:19.676: INFO: Restart count of pod container-probe-4261/liveness-8dd070a5-9a37-49c7-a862-4626c22f9b8a is now 3 (56.274909798s elapsed)
Feb  5 22:32:39.754: INFO: Restart count of pod container-probe-4261/liveness-8dd070a5-9a37-49c7-a862-4626c22f9b8a is now 4 (1m16.352888884s elapsed)
Feb  5 22:33:40.352: INFO: Restart count of pod container-probe-4261/liveness-8dd070a5-9a37-49c7-a862-4626c22f9b8a is now 5 (2m16.95106116s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:33:40.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4261" for this suite.

• [SLOW TEST:145.255 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3446,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:33:40.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Feb  5 22:33:40.559: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix331384700/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:33:40.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-913" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":216,"skipped":3530,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:33:40.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1877
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb  5 22:33:40.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6299'
Feb  5 22:33:40.978: INFO: stderr: ""
Feb  5 22:33:40.978: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Feb  5 22:33:51.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6299 -o json'
Feb  5 22:33:51.206: INFO: stderr: ""
Feb  5 22:33:51.206: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-05T22:33:40Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-6299\",\n        \"resourceVersion\": \"6622950\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-6299/pods/e2e-test-httpd-pod\",\n        \"uid\": \"eaa0608b-81f1-47bb-a974-e5dc20c9c5fb\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-nmgm7\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": 
\"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-nmgm7\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-nmgm7\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-05T22:33:41Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-05T22:33:49Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-05T22:33:49Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-05T22:33:40Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://94138f31776773296118f69cbb9a7debd07c004680f836190e946023e7a731b5\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                   
     \"startedAt\": \"2020-02-05T22:33:48Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-05T22:33:41Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  5 22:33:51.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6299'
Feb  5 22:33:51.589: INFO: stderr: ""
Feb  5 22:33:51.590: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1882
Feb  5 22:33:51.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6299'
Feb  5 22:33:56.478: INFO: stderr: ""
Feb  5 22:33:56.478: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:33:56.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6299" for this suite.

• [SLOW TEST:15.799 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1873
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":217,"skipped":3539,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:33:56.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-b2d418a3-c083-4d98-a29c-66a6249e4fc5
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:33:56.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9045" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":218,"skipped":3549,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:33:56.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:33:56.809: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:33:58.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3897" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":219,"skipped":3554,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:33:58.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:34:14.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3080" for this suite.

• [SLOW TEST:16.424 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":220,"skipped":3555,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:34:14.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:34:14.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:34:23.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1263" for this suite.

• [SLOW TEST:8.466 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3579,"failed":0}
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:34:23.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:34:23.494: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b2ce889d-896e-42da-b637-3f25204d6079", Controller:(*bool)(0xc0044b3402), BlockOwnerDeletion:(*bool)(0xc0044b3403)}}
Feb  5 22:34:23.503: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ceea1eb4-3441-4e14-83f1-ea8d55fd5522", Controller:(*bool)(0xc0045cb6ba), BlockOwnerDeletion:(*bool)(0xc0045cb6bb)}}
Feb  5 22:34:23.534: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9db1a1dd-b60a-45cd-9001-20b652ee84bf", Controller:(*bool)(0xc0044b358a), BlockOwnerDeletion:(*bool)(0xc0044b358b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:34:28.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8554" for this suite.

• [SLOW TEST:5.416 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":222,"skipped":3579,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:34:28.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-57484527-b43a-4f07-9986-0d29d34c4ee6
STEP: Creating a pod to test consume secrets
Feb  5 22:34:28.770: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8d536397-79fa-4366-a55b-be4cb71736d0" in namespace "projected-2085" to be "success or failure"
Feb  5 22:34:28.791: INFO: Pod "pod-projected-secrets-8d536397-79fa-4366-a55b-be4cb71736d0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.885529ms
Feb  5 22:34:30.799: INFO: Pod "pod-projected-secrets-8d536397-79fa-4366-a55b-be4cb71736d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027944862s
Feb  5 22:34:32.803: INFO: Pod "pod-projected-secrets-8d536397-79fa-4366-a55b-be4cb71736d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032844468s
Feb  5 22:34:34.821: INFO: Pod "pod-projected-secrets-8d536397-79fa-4366-a55b-be4cb71736d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050194938s
Feb  5 22:34:36.839: INFO: Pod "pod-projected-secrets-8d536397-79fa-4366-a55b-be4cb71736d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06864042s
STEP: Saw pod success
Feb  5 22:34:36.840: INFO: Pod "pod-projected-secrets-8d536397-79fa-4366-a55b-be4cb71736d0" satisfied condition "success or failure"
Feb  5 22:34:36.851: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-8d536397-79fa-4366-a55b-be4cb71736d0 container projected-secret-volume-test: 
STEP: delete the pod
Feb  5 22:34:36.925: INFO: Waiting for pod pod-projected-secrets-8d536397-79fa-4366-a55b-be4cb71736d0 to disappear
Feb  5 22:34:36.937: INFO: Pod pod-projected-secrets-8d536397-79fa-4366-a55b-be4cb71736d0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:34:36.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2085" for this suite.

• [SLOW TEST:8.400 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3580,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:34:37.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-1d64019c-4e67-4eeb-9270-27ff1a879d84
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:34:37.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-519" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":224,"skipped":3591,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:34:37.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  5 22:34:37.231: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  5 22:34:37.249: INFO: Waiting for terminating namespaces to be deleted...
Feb  5 22:34:37.285: INFO: 
Logging pods the kubelet thinks is on node jerma-node before test
Feb  5 22:34:37.294: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  5 22:34:37.294: INFO: 	Container weave ready: true, restart count 1
Feb  5 22:34:37.294: INFO: 	Container weave-npc ready: true, restart count 0
Feb  5 22:34:37.294: INFO: pod-exec-websocket-04e4ce89-590e-4b20-a249-873158c776e3 from pods-1263 started at 2020-02-05 22:34:15 +0000 UTC (1 container statuses recorded)
Feb  5 22:34:37.294: INFO: 	Container main ready: true, restart count 0
Feb  5 22:34:37.294: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb  5 22:34:37.294: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  5 22:34:37.294: INFO: 
Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb  5 22:34:37.317: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb  5 22:34:37.317: INFO: 	Container coredns ready: true, restart count 0
Feb  5 22:34:37.317: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb  5 22:34:37.317: INFO: 	Container coredns ready: true, restart count 0
Feb  5 22:34:37.317: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb  5 22:34:37.317: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  5 22:34:37.317: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  5 22:34:37.317: INFO: 	Container weave ready: true, restart count 0
Feb  5 22:34:37.317: INFO: 	Container weave-npc ready: true, restart count 0
Feb  5 22:34:37.317: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb  5 22:34:37.317: INFO: 	Container kube-controller-manager ready: true, restart count 3
Feb  5 22:34:37.317: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb  5 22:34:37.317: INFO: 	Container kube-scheduler ready: true, restart count 5
Feb  5 22:34:37.317: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb  5 22:34:37.317: INFO: 	Container etcd ready: true, restart count 1
Feb  5 22:34:37.317: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb  5 22:34:37.317: INFO: 	Container kube-apiserver ready: true, restart count 1
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f0a271728b5169], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:34:38.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9023" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":225,"skipped":3606,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:34:38.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-d5addd4d-6df3-4835-b54b-c82232b04ea7
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:34:48.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-141" for this suite.

• [SLOW TEST:10.366 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3651,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:34:48.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:34:48.849: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-d42e4694-ebb6-4b5c-8a88-48451eada034" in namespace "security-context-test-8890" to be "success or failure"
Feb  5 22:34:48.866: INFO: Pod "busybox-privileged-false-d42e4694-ebb6-4b5c-8a88-48451eada034": Phase="Pending", Reason="", readiness=false. Elapsed: 17.259637ms
Feb  5 22:34:50.872: INFO: Pod "busybox-privileged-false-d42e4694-ebb6-4b5c-8a88-48451eada034": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023094384s
Feb  5 22:34:52.893: INFO: Pod "busybox-privileged-false-d42e4694-ebb6-4b5c-8a88-48451eada034": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043820165s
Feb  5 22:34:54.904: INFO: Pod "busybox-privileged-false-d42e4694-ebb6-4b5c-8a88-48451eada034": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055488659s
Feb  5 22:34:56.915: INFO: Pod "busybox-privileged-false-d42e4694-ebb6-4b5c-8a88-48451eada034": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066069204s
Feb  5 22:34:58.921: INFO: Pod "busybox-privileged-false-d42e4694-ebb6-4b5c-8a88-48451eada034": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071874873s
Feb  5 22:34:58.921: INFO: Pod "busybox-privileged-false-d42e4694-ebb6-4b5c-8a88-48451eada034" satisfied condition "success or failure"
Feb  5 22:34:59.193: INFO: Got logs for pod "busybox-privileged-false-d42e4694-ebb6-4b5c-8a88-48451eada034": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:34:59.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8890" for this suite.

• [SLOW TEST:10.489 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3699,"failed":0}
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:34:59.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Feb  5 22:34:59.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  5 22:34:59.654: INFO: stderr: ""
Feb  5 22:34:59.654: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:34:59.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5565" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":228,"skipped":3699,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:34:59.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  5 22:35:00.721: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  5 22:35:02.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:35:05.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:35:06.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:35:08.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716538900, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  5 22:35:11.864: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:35:11.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4127-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:35:13.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8739" for this suite.
STEP: Destroying namespace "webhook-8739-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:13.554 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":229,"skipped":3746,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:35:13.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:35:13.355: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb  5 22:35:18.359: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  5 22:35:22.377: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb  5 22:35:22.422: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-4579 /apis/apps/v1/namespaces/deployment-4579/deployments/test-cleanup-deployment 98cce4f6-5f65-494e-9a72-a3ab76035149 6623536 1 2020-02-05 22:35:22 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00217f038  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Feb  5 22:35:22.502: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Feb  5 22:35:22.502: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb  5 22:35:22.503: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-4579 /apis/apps/v1/namespaces/deployment-4579/replicasets/test-cleanup-controller 2d69c4ee-b51f-4c86-8f17-262982e7729c 6623537 1 2020-02-05 22:35:13 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 98cce4f6-5f65-494e-9a72-a3ab76035149 0xc00217f6f7 0xc00217f6f8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00217f818  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb  5 22:35:22.534: INFO: Pod "test-cleanup-controller-d2ft8" is available:
&Pod{ObjectMeta:{test-cleanup-controller-d2ft8 test-cleanup-controller- deployment-4579 /api/v1/namespaces/deployment-4579/pods/test-cleanup-controller-d2ft8 02584895-1b46-4ae4-bbd2-f4edd8c2d048 6623530 0 2020-02-05 22:35:13 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 2d69c4ee-b51f-4c86-8f17-262982e7729c 0xc0044ea0e7 0xc0044ea0e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hjq9n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hjq9n,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hjq9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:35:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:35:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:35:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-05 22:35:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:35:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://78bcac562f5ff64ccfc58429715db798cb9afb1443c0566d5940a9320a360de3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:35:22.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4579" for this suite.

• [SLOW TEST:10.654 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":230,"skipped":3765,"failed":0}
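The pod dump above carries an ownerReference back to ReplicaSet "test-cleanup-controller", which is how the deployment machinery tracks which pods belong to which replica set when cleaning up old ones. A minimal sketch of resolving a pod's controlling owner (the dict-based representation and helper name are illustrative, not part of the e2e framework; the `controller` flag is the field the dump abbreviates behind pointer values):

```python
def controlling_owner(owner_refs):
    """Return (kind, name) of the owner marked as controller, else None.

    Mirrors how a pod's controlling object is identified from its
    metadata.ownerReferences list.
    """
    for ref in owner_refs:
        if ref.get("controller"):
            return ref["kind"], ref["name"]
    return None

# Shaped after the ownerReference visible in the pod dump above.
refs = [{"apiVersion": "apps/v1", "kind": "ReplicaSet",
         "name": "test-cleanup-controller", "controller": True}]
print(controlling_owner(refs))  # ('ReplicaSet', 'test-cleanup-controller')
```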
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:35:23.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-7f6bd9dc-2a24-43b9-b980-a88f7cfbd69f
STEP: Creating a pod to test consume secrets
Feb  5 22:35:24.279: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2" in namespace "projected-1153" to be "success or failure"
Feb  5 22:35:24.479: INFO: Pod "pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2": Phase="Pending", Reason="", readiness=false. Elapsed: 200.537564ms
Feb  5 22:35:26.486: INFO: Pod "pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206935512s
Feb  5 22:35:28.496: INFO: Pod "pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216908077s
Feb  5 22:35:30.506: INFO: Pod "pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.227706855s
Feb  5 22:35:32.518: INFO: Pod "pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.239362391s
Feb  5 22:35:34.533: INFO: Pod "pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.254598784s
Feb  5 22:35:36.554: INFO: Pod "pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.275459146s
Feb  5 22:35:38.569: INFO: Pod "pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.290625179s
STEP: Saw pod success
Feb  5 22:35:38.570: INFO: Pod "pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2" satisfied condition "success or failure"
Feb  5 22:35:38.577: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2 container secret-volume-test: 
STEP: delete the pod
Feb  5 22:35:38.623: INFO: Waiting for pod pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2 to disappear
Feb  5 22:35:38.682: INFO: Pod pod-projected-secrets-f7d1367f-3865-4b3f-86b8-55e4bb1709d2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:35:38.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1153" for this suite.

• [SLOW TEST:14.818 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3795,"failed":0}
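The repeated `Elapsed:` lines above come from the framework polling the pod phase roughly every 2 s until it reports `Succeeded` or the 5 m timeout expires. The same wait-until pattern, sketched in Python (the helper and the simulated phase sequence are hypothetical, not the framework's Go code):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0):
    """Poll `check` every `interval` seconds until it returns True,
    or raise TimeoutError after `timeout` seconds, mirroring the
    'Waiting up to 5m0s ... Elapsed: ...' loop in the log."""
    start = time.monotonic()
    while True:
        if check():
            return time.monotonic() - start
        if time.monotonic() - start >= timeout:
            raise TimeoutError("condition not met within %.0fs" % timeout)
        time.sleep(interval)

# Simulate a pod that stays Pending for two polls, then Succeeds.
phases = iter(["Pending", "Pending", "Succeeded"])
state = {"phase": "Pending"}

def pod_succeeded():
    state["phase"] = next(phases, state["phase"])
    return state["phase"] == "Succeeded"

elapsed = wait_for_condition(pod_succeeded, timeout=10.0, interval=0.01)
print(state["phase"])  # Succeeded
```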
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:35:38.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  5 22:35:39.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22141753-c8c6-4599-a118-5d8a32487c0d" in namespace "projected-9549" to be "success or failure"
Feb  5 22:35:39.055: INFO: Pod "downwardapi-volume-22141753-c8c6-4599-a118-5d8a32487c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 34.000326ms
Feb  5 22:35:41.064: INFO: Pod "downwardapi-volume-22141753-c8c6-4599-a118-5d8a32487c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043549359s
Feb  5 22:35:43.071: INFO: Pod "downwardapi-volume-22141753-c8c6-4599-a118-5d8a32487c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049736907s
Feb  5 22:35:45.077: INFO: Pod "downwardapi-volume-22141753-c8c6-4599-a118-5d8a32487c0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056397251s
Feb  5 22:35:47.088: INFO: Pod "downwardapi-volume-22141753-c8c6-4599-a118-5d8a32487c0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066976022s
STEP: Saw pod success
Feb  5 22:35:47.088: INFO: Pod "downwardapi-volume-22141753-c8c6-4599-a118-5d8a32487c0d" satisfied condition "success or failure"
Feb  5 22:35:47.092: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-22141753-c8c6-4599-a118-5d8a32487c0d container client-container: 
STEP: delete the pod
Feb  5 22:35:47.151: INFO: Waiting for pod downwardapi-volume-22141753-c8c6-4599-a118-5d8a32487c0d to disappear
Feb  5 22:35:47.157: INFO: Pod downwardapi-volume-22141753-c8c6-4599-a118-5d8a32487c0d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:35:47.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9549" for this suite.

• [SLOW TEST:8.483 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3838,"failed":0}
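What this test asserts: when a container declares no cpu limit, the downward API volume reports the node's allocatable cpu in its place. The fallback rule, reduced to a sketch (the dict-based resource representation is an illustrative simplification, not the kubelet's implementation):

```python
def effective_cpu_limit(container_limits, node_allocatable_cpu):
    """Per the test's contract: a container with no cpu limit set
    sees the node's allocatable cpu via the downward API."""
    return container_limits.get("cpu", node_allocatable_cpu)

print(effective_cpu_limit({}, "4"))               # 4     (no limit -> allocatable)
print(effective_cpu_limit({"cpu": "500m"}, "4"))  # 500m  (explicit limit wins)
```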
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:35:47.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:35:47.310: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:35:47.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7338" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":233,"skipped":3847,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:35:47.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  5 22:35:47.710: INFO: Waiting up to 5m0s for pod "pod-c14edf5d-a7a2-4e6a-bb53-d84843453585" in namespace "emptydir-9405" to be "success or failure"
Feb  5 22:35:47.713: INFO: Pod "pod-c14edf5d-a7a2-4e6a-bb53-d84843453585": Phase="Pending", Reason="", readiness=false. Elapsed: 2.986137ms
Feb  5 22:35:49.720: INFO: Pod "pod-c14edf5d-a7a2-4e6a-bb53-d84843453585": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010004509s
Feb  5 22:35:51.725: INFO: Pod "pod-c14edf5d-a7a2-4e6a-bb53-d84843453585": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015183437s
Feb  5 22:35:53.732: INFO: Pod "pod-c14edf5d-a7a2-4e6a-bb53-d84843453585": Phase="Pending", Reason="", readiness=false. Elapsed: 6.022147668s
Feb  5 22:35:55.739: INFO: Pod "pod-c14edf5d-a7a2-4e6a-bb53-d84843453585": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.029024428s
STEP: Saw pod success
Feb  5 22:35:55.739: INFO: Pod "pod-c14edf5d-a7a2-4e6a-bb53-d84843453585" satisfied condition "success or failure"
Feb  5 22:35:55.744: INFO: Trying to get logs from node jerma-node pod pod-c14edf5d-a7a2-4e6a-bb53-d84843453585 container test-container: 
STEP: delete the pod
Feb  5 22:35:55.982: INFO: Waiting for pod pod-c14edf5d-a7a2-4e6a-bb53-d84843453585 to disappear
Feb  5 22:35:56.003: INFO: Pod pod-c14edf5d-a7a2-4e6a-bb53-d84843453585 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:35:56.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9405" for this suite.

• [SLOW TEST:8.448 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3848,"failed":0}
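The `(non-root,0777,tmpfs)` variant mounts a memory-backed emptyDir with mode 0777 and has the test container verify the mount point's permissions. The same mode check in plain Python against a local directory (illustrative only; in the real test the kubelet creates and chmods the emptyDir mount):

```python
import os
import stat
import tempfile

# Create a directory, set mode 0777 as the emptyDir test does, and
# read the permission bits back to confirm they stuck.
d = tempfile.mkdtemp()
os.chmod(d, 0o777)
mode = stat.S_IMODE(os.stat(d).st_mode)
print(oct(mode))  # 0o777
```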
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:35:56.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb  5 22:36:04.662: INFO: Successfully updated pod "adopt-release-f52hx"
STEP: Checking that the Job readopts the Pod
Feb  5 22:36:04.662: INFO: Waiting up to 15m0s for pod "adopt-release-f52hx" in namespace "job-2967" to be "adopted"
Feb  5 22:36:04.688: INFO: Pod "adopt-release-f52hx": Phase="Running", Reason="", readiness=true. Elapsed: 26.311516ms
Feb  5 22:36:06.695: INFO: Pod "adopt-release-f52hx": Phase="Running", Reason="", readiness=true. Elapsed: 2.032990209s
Feb  5 22:36:06.695: INFO: Pod "adopt-release-f52hx" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb  5 22:36:07.218: INFO: Successfully updated pod "adopt-release-f52hx"
STEP: Checking that the Job releases the Pod
Feb  5 22:36:07.218: INFO: Waiting up to 15m0s for pod "adopt-release-f52hx" in namespace "job-2967" to be "released"
Feb  5 22:36:07.231: INFO: Pod "adopt-release-f52hx": Phase="Running", Reason="", readiness=true. Elapsed: 12.997366ms
Feb  5 22:36:07.232: INFO: Pod "adopt-release-f52hx" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:36:07.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2967" for this suite.

• [SLOW TEST:11.353 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":235,"skipped":3861,"failed":0}
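The adopt/release flow above hinges on label matching: the Job readopts an orphaned pod whose labels still satisfy its selector, and releases the pod once those labels are removed. A sketch of that matching rule (the selector and label dicts are hypothetical values, not taken from this run):

```python
def selector_matches(selector, labels):
    """A pod is adoptable when every selector key/value pair is
    present in the pod's labels; removing any of them releases it."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"job-name": "adopt-release"}   # hypothetical Job selector
orphan   = {"job-name": "adopt-release"}   # matching orphan -> readopted
stripped = {}                              # labels removed  -> released
print(selector_matches(selector, orphan))    # True
print(selector_matches(selector, stripped))  # False
```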
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:36:07.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:36:23.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6629" for this suite.

• [SLOW TEST:16.438 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":236,"skipped":3871,"failed":0}
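A BestEffort-scoped quota only counts pods in the BestEffort QoS class, i.e. pods whose containers set no resource requests or limits at all (the pod dump earlier in the log shows `QOSClass:BestEffort` for exactly that reason). A simplified classification sketch (the container dicts are hypothetical, and the Guaranteed check is condensed relative to the real per-resource rules):

```python
def qos_class(containers):
    """Simplified Kubernetes QoS classification: BestEffort only when
    no container sets any requests or limits; Guaranteed when every
    container's requests equal its limits; otherwise Burstable."""
    if all(not c.get("requests") and not c.get("limits") for c in containers):
        return "BestEffort"
    if all(c.get("requests") and c.get("requests") == c.get("limits")
           for c in containers):
        return "Guaranteed"
    return "Burstable"

print(qos_class([{}]))  # BestEffort -> counted by the best-effort quota
print(qos_class([{"requests": {"cpu": "100m"}, "limits": {"cpu": "200m"}}]))  # Burstable
```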
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:36:23.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-4ed5b9af-6a5b-4542-b123-a8e940c9ca40
STEP: Creating a pod to test consume secrets
Feb  5 22:36:23.971: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-92c751fe-0c1e-42dd-a96e-0da4e847515e" in namespace "projected-5152" to be "success or failure"
Feb  5 22:36:23.993: INFO: Pod "pod-projected-secrets-92c751fe-0c1e-42dd-a96e-0da4e847515e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.962183ms
Feb  5 22:36:25.998: INFO: Pod "pod-projected-secrets-92c751fe-0c1e-42dd-a96e-0da4e847515e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026935727s
Feb  5 22:36:28.005: INFO: Pod "pod-projected-secrets-92c751fe-0c1e-42dd-a96e-0da4e847515e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033316875s
Feb  5 22:36:30.020: INFO: Pod "pod-projected-secrets-92c751fe-0c1e-42dd-a96e-0da4e847515e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048020109s
Feb  5 22:36:32.027: INFO: Pod "pod-projected-secrets-92c751fe-0c1e-42dd-a96e-0da4e847515e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055043536s
STEP: Saw pod success
Feb  5 22:36:32.027: INFO: Pod "pod-projected-secrets-92c751fe-0c1e-42dd-a96e-0da4e847515e" satisfied condition "success or failure"
Feb  5 22:36:32.032: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-92c751fe-0c1e-42dd-a96e-0da4e847515e container projected-secret-volume-test: 
STEP: delete the pod
Feb  5 22:36:32.148: INFO: Waiting for pod pod-projected-secrets-92c751fe-0c1e-42dd-a96e-0da4e847515e to disappear
Feb  5 22:36:32.159: INFO: Pod pod-projected-secrets-92c751fe-0c1e-42dd-a96e-0da4e847515e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:36:32.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5152" for this suite.

• [SLOW TEST:8.357 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3875,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:36:32.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  5 22:36:32.370: INFO: Waiting up to 5m0s for pod "pod-03b6ebd6-f701-4eab-9cf7-1d81adabc2bf" in namespace "emptydir-7378" to be "success or failure"
Feb  5 22:36:32.400: INFO: Pod "pod-03b6ebd6-f701-4eab-9cf7-1d81adabc2bf": Phase="Pending", Reason="", readiness=false. Elapsed: 29.718213ms
Feb  5 22:36:34.408: INFO: Pod "pod-03b6ebd6-f701-4eab-9cf7-1d81adabc2bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038191177s
Feb  5 22:36:36.414: INFO: Pod "pod-03b6ebd6-f701-4eab-9cf7-1d81adabc2bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044368824s
Feb  5 22:36:38.423: INFO: Pod "pod-03b6ebd6-f701-4eab-9cf7-1d81adabc2bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053148734s
Feb  5 22:36:40.952: INFO: Pod "pod-03b6ebd6-f701-4eab-9cf7-1d81adabc2bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.581912488s
STEP: Saw pod success
Feb  5 22:36:40.952: INFO: Pod "pod-03b6ebd6-f701-4eab-9cf7-1d81adabc2bf" satisfied condition "success or failure"
Feb  5 22:36:40.961: INFO: Trying to get logs from node jerma-node pod pod-03b6ebd6-f701-4eab-9cf7-1d81adabc2bf container test-container: 
STEP: delete the pod
Feb  5 22:36:41.137: INFO: Waiting for pod pod-03b6ebd6-f701-4eab-9cf7-1d81adabc2bf to disappear
Feb  5 22:36:41.147: INFO: Pod pod-03b6ebd6-f701-4eab-9cf7-1d81adabc2bf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:36:41.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7378" for this suite.

• [SLOW TEST:8.998 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3876,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:36:41.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  5 22:36:42.404: INFO: Pod name wrapped-volume-race-8a15481b-e8f7-4be4-b818-4d1b6d6b1fb4: Found 0 pods out of 5
Feb  5 22:36:47.440: INFO: Pod name wrapped-volume-race-8a15481b-e8f7-4be4-b818-4d1b6d6b1fb4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8a15481b-e8f7-4be4-b818-4d1b6d6b1fb4 in namespace emptydir-wrapper-7490, will wait for the garbage collector to delete the pods
Feb  5 22:37:13.617: INFO: Deleting ReplicationController wrapped-volume-race-8a15481b-e8f7-4be4-b818-4d1b6d6b1fb4 took: 10.693814ms
Feb  5 22:37:14.118: INFO: Terminating ReplicationController wrapped-volume-race-8a15481b-e8f7-4be4-b818-4d1b6d6b1fb4 pods took: 500.714171ms
STEP: Creating RC which spawns configmap-volume pods
Feb  5 22:37:33.067: INFO: Pod name wrapped-volume-race-374f4201-81e0-484a-a91e-fa6e84ab64d9: Found 0 pods out of 5
Feb  5 22:37:38.090: INFO: Pod name wrapped-volume-race-374f4201-81e0-484a-a91e-fa6e84ab64d9: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-374f4201-81e0-484a-a91e-fa6e84ab64d9 in namespace emptydir-wrapper-7490, will wait for the garbage collector to delete the pods
Feb  5 22:38:08.194: INFO: Deleting ReplicationController wrapped-volume-race-374f4201-81e0-484a-a91e-fa6e84ab64d9 took: 9.340705ms
Feb  5 22:38:08.595: INFO: Terminating ReplicationController wrapped-volume-race-374f4201-81e0-484a-a91e-fa6e84ab64d9 pods took: 401.066575ms
STEP: Creating RC which spawns configmap-volume pods
Feb  5 22:38:22.964: INFO: Pod name wrapped-volume-race-484b9443-710a-4f03-913c-e5d78962c2e7: Found 0 pods out of 5
Feb  5 22:38:27.971: INFO: Pod name wrapped-volume-race-484b9443-710a-4f03-913c-e5d78962c2e7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-484b9443-710a-4f03-913c-e5d78962c2e7 in namespace emptydir-wrapper-7490, will wait for the garbage collector to delete the pods
Feb  5 22:38:56.066: INFO: Deleting ReplicationController wrapped-volume-race-484b9443-710a-4f03-913c-e5d78962c2e7 took: 10.214228ms
Feb  5 22:38:56.567: INFO: Terminating ReplicationController wrapped-volume-race-484b9443-710a-4f03-913c-e5d78962c2e7 pods took: 501.362525ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:39:14.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7490" for this suite.

• [SLOW TEST:153.029 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":239,"skipped":3881,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:39:14.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Feb  5 22:39:14.369: INFO: Waiting up to 5m0s for pod "var-expansion-781610e1-9830-44bf-bd1f-312910536e5a" in namespace "var-expansion-2365" to be "success or failure"
Feb  5 22:39:14.392: INFO: Pod "var-expansion-781610e1-9830-44bf-bd1f-312910536e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.894718ms
Feb  5 22:39:16.400: INFO: Pod "var-expansion-781610e1-9830-44bf-bd1f-312910536e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030809715s
Feb  5 22:39:18.405: INFO: Pod "var-expansion-781610e1-9830-44bf-bd1f-312910536e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035798775s
Feb  5 22:39:20.413: INFO: Pod "var-expansion-781610e1-9830-44bf-bd1f-312910536e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044062969s
Feb  5 22:39:22.431: INFO: Pod "var-expansion-781610e1-9830-44bf-bd1f-312910536e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061562898s
Feb  5 22:39:24.448: INFO: Pod "var-expansion-781610e1-9830-44bf-bd1f-312910536e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078922473s
Feb  5 22:39:26.456: INFO: Pod "var-expansion-781610e1-9830-44bf-bd1f-312910536e5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.086998895s
STEP: Saw pod success
Feb  5 22:39:26.457: INFO: Pod "var-expansion-781610e1-9830-44bf-bd1f-312910536e5a" satisfied condition "success or failure"
Feb  5 22:39:26.465: INFO: Trying to get logs from node jerma-node pod var-expansion-781610e1-9830-44bf-bd1f-312910536e5a container dapi-container: 
STEP: delete the pod
Feb  5 22:39:26.521: INFO: Waiting for pod var-expansion-781610e1-9830-44bf-bd1f-312910536e5a to disappear
Feb  5 22:39:26.623: INFO: Pod var-expansion-781610e1-9830-44bf-bd1f-312910536e5a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:39:26.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2365" for this suite.

• [SLOW TEST:12.440 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3885,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:39:26.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  5 22:39:27.157: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  5 22:39:29.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:39:31.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:39:33.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:39:35.177: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539167, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  5 22:39:38.261: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:39:50.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1264" for this suite.
STEP: Destroying namespace "webhook-1264-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:24.159 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":241,"skipped":3902,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:39:50.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  5 22:39:50.891: INFO: Waiting up to 5m0s for pod "downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6" in namespace "downward-api-8279" to be "success or failure"
Feb  5 22:39:50.905: INFO: Pod "downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.580509ms
Feb  5 22:39:53.369: INFO: Pod "downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.478001392s
Feb  5 22:39:55.378: INFO: Pod "downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.486865687s
Feb  5 22:39:57.385: INFO: Pod "downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.494034909s
Feb  5 22:39:59.421: INFO: Pod "downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529718791s
Feb  5 22:40:01.427: INFO: Pod "downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.536296113s
Feb  5 22:40:03.434: INFO: Pod "downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.542697696s
STEP: Saw pod success
Feb  5 22:40:03.434: INFO: Pod "downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6" satisfied condition "success or failure"
Feb  5 22:40:03.438: INFO: Trying to get logs from node jerma-node pod downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6 container dapi-container: 
STEP: delete the pod
Feb  5 22:40:03.489: INFO: Waiting for pod downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6 to disappear
Feb  5 22:40:03.519: INFO: Pod downward-api-be8139d6-8d1c-4f34-b547-5972ff05cba6 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:40:03.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8279" for this suite.

• [SLOW TEST:12.771 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3931,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:40:03.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  5 22:40:03.771: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c" in namespace "projected-3190" to be "success or failure"
Feb  5 22:40:03.790: INFO: Pod "downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.114273ms
Feb  5 22:40:05.797: INFO: Pod "downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025935785s
Feb  5 22:40:07.816: INFO: Pod "downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044584853s
Feb  5 22:40:09.826: INFO: Pod "downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05462314s
Feb  5 22:40:11.835: INFO: Pod "downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064038264s
Feb  5 22:40:13.841: INFO: Pod "downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069491s
STEP: Saw pod success
Feb  5 22:40:13.841: INFO: Pod "downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c" satisfied condition "success or failure"
Feb  5 22:40:13.844: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c container client-container: 
STEP: delete the pod
Feb  5 22:40:13.890: INFO: Waiting for pod downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c to disappear
Feb  5 22:40:13.917: INFO: Pod downwardapi-volume-ca0a6af5-b7e1-48f4-9cf6-e3b5c787f31c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:40:13.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3190" for this suite.

• [SLOW TEST:10.363 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3976,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:40:13.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  5 22:40:14.550: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  5 22:40:16.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:40:18.588: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:40:20.586: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539214, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  5 22:40:23.638: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:40:23.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5184-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:40:25.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7637" for this suite.
STEP: Destroying namespace "webhook-7637-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.509 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":244,"skipped":4028,"failed":0}
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:40:25.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-242c1375-a8a2-4d14-9f80-0896e789437a in namespace container-probe-737
Feb  5 22:40:35.691: INFO: Started pod test-webserver-242c1375-a8a2-4d14-9f80-0896e789437a in namespace container-probe-737
STEP: checking the pod's current state and verifying that restartCount is present
Feb  5 22:40:35.694: INFO: Initial restart count of pod test-webserver-242c1375-a8a2-4d14-9f80-0896e789437a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:44:36.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-737" for this suite.

• [SLOW TEST:251.395 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4030,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:44:36.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:44:54.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8400" for this suite.

• [SLOW TEST:17.312 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":246,"skipped":4031,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:44:54.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Feb  5 22:44:54.262: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:45:03.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3389" for this suite.

• [SLOW TEST:9.599 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":247,"skipped":4032,"failed":0}
SSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:45:03.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:45:03.969: INFO: Waiting up to 5m0s for pod "busybox-user-65534-efeee9c7-bfc9-4e00-810c-eb1e1e5602c3" in namespace "security-context-test-3413" to be "success or failure"
Feb  5 22:45:03.988: INFO: Pod "busybox-user-65534-efeee9c7-bfc9-4e00-810c-eb1e1e5602c3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.241814ms
Feb  5 22:45:05.994: INFO: Pod "busybox-user-65534-efeee9c7-bfc9-4e00-810c-eb1e1e5602c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025159522s
Feb  5 22:45:07.999: INFO: Pod "busybox-user-65534-efeee9c7-bfc9-4e00-810c-eb1e1e5602c3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029816459s
Feb  5 22:45:10.010: INFO: Pod "busybox-user-65534-efeee9c7-bfc9-4e00-810c-eb1e1e5602c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041139577s
Feb  5 22:45:12.037: INFO: Pod "busybox-user-65534-efeee9c7-bfc9-4e00-810c-eb1e1e5602c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067313865s
Feb  5 22:45:14.048: INFO: Pod "busybox-user-65534-efeee9c7-bfc9-4e00-810c-eb1e1e5602c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078488643s
Feb  5 22:45:14.048: INFO: Pod "busybox-user-65534-efeee9c7-bfc9-4e00-810c-eb1e1e5602c3" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:45:14.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3413" for this suite.

• [SLOW TEST:10.320 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4038,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:45:14.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  5 22:45:20.229: INFO: &Pod{ObjectMeta:{send-events-6d8f6447-65dd-4b62-9c2a-9de30974447a  events-5064 /api/v1/namespaces/events-5064/pods/send-events-6d8f6447-65dd-4b62-9c2a-9de30974447a 922db773-e153-4486-8ca8-a7046fccf3e8 6626353 0 2020-02-05 22:45:14 +0000 UTC   map[name:foo time:194133383] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qsr2k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qsr2k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qsr2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:45:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:45:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:45:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:45:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-05 22:45:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:45:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://de873a253bc179f8291ac313ebdeeee12966cd63f4c855dfbfb84e4eaf4d0374,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Feb  5 22:45:22.243: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  5 22:45:24.250: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:45:24.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5064" for this suite.

• [SLOW TEST:10.256 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":249,"skipped":4041,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:45:24.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  5 22:45:25.130: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  5 22:45:27.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:45:29.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:45:31.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:45:33.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539525, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  5 22:45:36.182: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:45:36.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2092" for this suite.
STEP: Destroying namespace "webhook-2092-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.056 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":250,"skipped":4049,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:45:36.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:45:36.528: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  5 22:45:41.565: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  5 22:45:45.612: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  5 22:45:47.617: INFO: Creating deployment "test-rollover-deployment"
Feb  5 22:45:47.633: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  5 22:45:49.662: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  5 22:45:49.675: INFO: Ensure that both replica sets have 1 created replica
Feb  5 22:45:49.683: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  5 22:45:49.699: INFO: Updating deployment test-rollover-deployment
Feb  5 22:45:49.699: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  5 22:45:51.718: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  5 22:45:51.729: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  5 22:45:51.740: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 22:45:51.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539550, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:45:53.749: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 22:45:53.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539550, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:45:55.756: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 22:45:55.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539550, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:45:57.751: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 22:45:57.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539557, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:45:59.753: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 22:45:59.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539557, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:46:01.753: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 22:46:01.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539557, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:46:03.793: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 22:46:03.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539557, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:46:05.896: INFO: all replica sets need to contain the pod-template-hash label
Feb  5 22:46:05.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539557, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539547, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:46:07.753: INFO: 
Feb  5 22:46:07.754: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Feb  5 22:46:07.769: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-702 /apis/apps/v1/namespaces/deployment-702/deployments/test-rollover-deployment 2b3672f9-1117-42f8-9db9-28692ae8d926 6626608 2 2020-02-05 22:45:47 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0057d9a28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-05 22:45:47 +0000 UTC,LastTransitionTime:2020-02-05 22:45:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-05 22:46:07 +0000 UTC,LastTransitionTime:2020-02-05 22:45:47 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb  5 22:46:07.776: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-702 /apis/apps/v1/namespaces/deployment-702/replicasets/test-rollover-deployment-574d6dfbff bdfceaa0-e488-46c5-8338-4c56244e29b7 6626598 2 2020-02-05 22:45:49 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 2b3672f9-1117-42f8-9db9-28692ae8d926 0xc00308d177 0xc00308d178}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00308d1e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb  5 22:46:07.776: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  5 22:46:07.776: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-702 /apis/apps/v1/namespaces/deployment-702/replicasets/test-rollover-controller 12a40ac3-6596-4f70-b324-4c6ef10e918e 6626607 2 2020-02-05 22:45:36 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 2b3672f9-1117-42f8-9db9-28692ae8d926 0xc00308d0a7 0xc00308d0a8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00308d108  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  5 22:46:07.776: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-702 /apis/apps/v1/namespaces/deployment-702/replicasets/test-rollover-deployment-f6c94f66c 8ebcdb90-984c-448d-a9b6-d1709820acac 6626546 2 2020-02-05 22:45:47 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 2b3672f9-1117-42f8-9db9-28692ae8d926 0xc00308d250 0xc00308d251}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00308d2c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb  5 22:46:07.781: INFO: Pod "test-rollover-deployment-574d6dfbff-lwlkx" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-lwlkx test-rollover-deployment-574d6dfbff- deployment-702 /api/v1/namespaces/deployment-702/pods/test-rollover-deployment-574d6dfbff-lwlkx 18f691df-516f-4e50-a6bc-a1bc02c78f94 6626571 0 2020-02-05 22:45:50 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff bdfceaa0-e488-46c5-8338-4c56244e29b7 0xc002b5ef97 0xc002b5ef98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8l224,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8l224,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8l224,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:45:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:45:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:45:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 22:45:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-05 22:45:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-05 22:45:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://b8913f8e806598b336524f11c8dccda60c82e7bfd698d6b30bd4e73d4eba696c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:46:07.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-702" for this suite.

• [SLOW TEST:31.412 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":251,"skipped":4113,"failed":0}
SSSSSSSSSSSSS
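Each completed spec emits a one-line JSON status record like the `{"msg":"PASSED ...","total":278,"completed":251,...}` line above. A minimal sketch (Python, function name hypothetical) of parsing these lines to track suite progress when post-processing such a log:

```python
import json

def parse_status(line):
    """Parse a Ginkgo JSON status line into (message, completed, total)."""
    rec = json.loads(line)
    return rec["msg"], rec["completed"], rec["total"]

line = ('{"msg":"PASSED [sig-apps] Deployment deployment should support rollover '
        '[Conformance]","total":278,"completed":251,"skipped":4113,"failed":0}')
msg, done, total = parse_status(line)
print(f"{done}/{total} specs completed")  # 251/278 specs completed
```

Filtering a full log for lines starting with `{"msg"` and feeding them through this parser gives a running completion count without relying on Ginkgo's own reporter.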
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:46:07.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6079
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6079
STEP: Creating statefulset with conflicting port in namespace statefulset-6079
STEP: Waiting until pod test-pod will start running in namespace statefulset-6079
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6079
Feb  5 22:46:22.236: INFO: Observed stateful pod in namespace: statefulset-6079, name: ss-0, uid: 4301cf97-a3f2-43a7-ac1c-af1742ad82a9, status phase: Pending. Waiting for statefulset controller to delete.
Feb  5 22:46:22.309: INFO: Observed stateful pod in namespace: statefulset-6079, name: ss-0, uid: 4301cf97-a3f2-43a7-ac1c-af1742ad82a9, status phase: Failed. Waiting for statefulset controller to delete.
Feb  5 22:46:22.327: INFO: Observed stateful pod in namespace: statefulset-6079, name: ss-0, uid: 4301cf97-a3f2-43a7-ac1c-af1742ad82a9, status phase: Failed. Waiting for statefulset controller to delete.
Feb  5 22:46:22.394: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6079
STEP: Removing pod with conflicting port in namespace statefulset-6079
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6079 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Feb  5 22:46:32.512: INFO: Deleting all statefulset in ns statefulset-6079
Feb  5 22:46:32.515: INFO: Scaling statefulset ss to 0
Feb  5 22:46:42.545: INFO: Waiting for statefulset status.replicas updated to 0
Feb  5 22:46:42.549: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:46:42.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6079" for this suite.

• [SLOW TEST:34.799 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":252,"skipped":4126,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:46:42.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-8468
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  5 22:46:42.692: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  5 22:47:23.684: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8468 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  5 22:47:23.684: INFO: >>> kubeConfig: /root/.kube/config
I0205 22:47:23.727528       9 log.go:172] (0xc001ae58c0) (0xc00172d9a0) Create stream
I0205 22:47:23.727687       9 log.go:172] (0xc001ae58c0) (0xc00172d9a0) Stream added, broadcasting: 1
I0205 22:47:23.732982       9 log.go:172] (0xc001ae58c0) Reply frame received for 1
I0205 22:47:23.733124       9 log.go:172] (0xc001ae58c0) (0xc0026da000) Create stream
I0205 22:47:23.733143       9 log.go:172] (0xc001ae58c0) (0xc0026da000) Stream added, broadcasting: 3
I0205 22:47:23.734981       9 log.go:172] (0xc001ae58c0) Reply frame received for 3
I0205 22:47:23.735013       9 log.go:172] (0xc001ae58c0) (0xc00172dc20) Create stream
I0205 22:47:23.735020       9 log.go:172] (0xc001ae58c0) (0xc00172dc20) Stream added, broadcasting: 5
I0205 22:47:23.736299       9 log.go:172] (0xc001ae58c0) Reply frame received for 5
I0205 22:47:23.892444       9 log.go:172] (0xc001ae58c0) Data frame received for 3
I0205 22:47:23.892751       9 log.go:172] (0xc0026da000) (3) Data frame handling
I0205 22:47:23.892776       9 log.go:172] (0xc0026da000) (3) Data frame sent
I0205 22:47:24.005624       9 log.go:172] (0xc001ae58c0) Data frame received for 1
I0205 22:47:24.005861       9 log.go:172] (0xc00172d9a0) (1) Data frame handling
I0205 22:47:24.005934       9 log.go:172] (0xc00172d9a0) (1) Data frame sent
I0205 22:47:24.011253       9 log.go:172] (0xc001ae58c0) (0xc00172d9a0) Stream removed, broadcasting: 1
I0205 22:47:24.011921       9 log.go:172] (0xc001ae58c0) (0xc0026da000) Stream removed, broadcasting: 3
I0205 22:47:24.012409       9 log.go:172] (0xc001ae58c0) (0xc00172dc20) Stream removed, broadcasting: 5
I0205 22:47:24.012461       9 log.go:172] (0xc001ae58c0) (0xc00172d9a0) Stream removed, broadcasting: 1
I0205 22:47:24.012481       9 log.go:172] (0xc001ae58c0) (0xc0026da000) Stream removed, broadcasting: 3
I0205 22:47:24.012494       9 log.go:172] (0xc001ae58c0) (0xc00172dc20) Stream removed, broadcasting: 5
Feb  5 22:47:24.012: INFO: Found all expected endpoints: [netserver-0]
I0205 22:47:24.013482       9 log.go:172] (0xc001ae58c0) Go away received
Feb  5 22:47:24.020: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8468 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  5 22:47:24.020: INFO: >>> kubeConfig: /root/.kube/config
I0205 22:47:24.079223       9 log.go:172] (0xc001d3e420) (0xc00117d0e0) Create stream
I0205 22:47:24.079573       9 log.go:172] (0xc001d3e420) (0xc00117d0e0) Stream added, broadcasting: 1
I0205 22:47:24.086052       9 log.go:172] (0xc001d3e420) Reply frame received for 1
I0205 22:47:24.086194       9 log.go:172] (0xc001d3e420) (0xc00117d180) Create stream
I0205 22:47:24.086220       9 log.go:172] (0xc001d3e420) (0xc00117d180) Stream added, broadcasting: 3
I0205 22:47:24.087899       9 log.go:172] (0xc001d3e420) Reply frame received for 3
I0205 22:47:24.087929       9 log.go:172] (0xc001d3e420) (0xc0026da320) Create stream
I0205 22:47:24.087944       9 log.go:172] (0xc001d3e420) (0xc0026da320) Stream added, broadcasting: 5
I0205 22:47:24.089496       9 log.go:172] (0xc001d3e420) Reply frame received for 5
I0205 22:47:24.204186       9 log.go:172] (0xc001d3e420) Data frame received for 3
I0205 22:47:24.204418       9 log.go:172] (0xc00117d180) (3) Data frame handling
I0205 22:47:24.204465       9 log.go:172] (0xc00117d180) (3) Data frame sent
I0205 22:47:24.318982       9 log.go:172] (0xc001d3e420) (0xc00117d180) Stream removed, broadcasting: 3
I0205 22:47:24.319275       9 log.go:172] (0xc001d3e420) Data frame received for 1
I0205 22:47:24.319289       9 log.go:172] (0xc00117d0e0) (1) Data frame handling
I0205 22:47:24.319297       9 log.go:172] (0xc00117d0e0) (1) Data frame sent
I0205 22:47:24.319303       9 log.go:172] (0xc001d3e420) (0xc00117d0e0) Stream removed, broadcasting: 1
I0205 22:47:24.319455       9 log.go:172] (0xc001d3e420) (0xc0026da320) Stream removed, broadcasting: 5
I0205 22:47:24.319472       9 log.go:172] (0xc001d3e420) (0xc00117d0e0) Stream removed, broadcasting: 1
I0205 22:47:24.319478       9 log.go:172] (0xc001d3e420) (0xc00117d180) Stream removed, broadcasting: 3
I0205 22:47:24.319483       9 log.go:172] (0xc001d3e420) (0xc0026da320) Stream removed, broadcasting: 5
Feb  5 22:47:24.319: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:47:24.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0205 22:47:24.320848       9 log.go:172] (0xc001d3e420) Go away received
STEP: Destroying namespace "pod-network-test-8468" for this suite.

• [SLOW TEST:41.736 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4169,"failed":0}
SSSSSSSSSSSSSSS
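The connectivity probe in the networking spec above fetches `/hostName` from each netserver pod and strips blank lines (`curl ... | grep -v '^\s*$'`). The same check can be sketched locally without a cluster; the handler below is a hypothetical stand-in for the agnhost netserver's `/hostName` endpoint:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HostNameHandler(BaseHTTPRequestHandler):
    # Hypothetical stand-in for agnhost's /hostName: always reports one hostname.
    def do_GET(self):
        body = b"netserver-0\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging out of the output

server = HTTPServer(("127.0.0.1", 0), HostNameHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/hostName"
with urllib.request.urlopen(url, timeout=5) as resp:
    # Mirror the log's `grep -v '^\s*$'`: keep only non-blank lines.
    lines = [l for l in resp.read().decode().splitlines() if l.strip()]
server.shutdown()
print(lines)  # ['netserver-0']
```

In the real test the set of hostnames collected this way is compared against the expected endpoints (`[netserver-0]`, `[netserver-1]` in the log above).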
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:47:24.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-51e0fa26-632e-4630-aa55-7d860435773c
STEP: Creating a pod to test consume secrets
Feb  5 22:47:24.567: INFO: Waiting up to 5m0s for pod "pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676" in namespace "secrets-4906" to be "success or failure"
Feb  5 22:47:24.577: INFO: Pod "pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676": Phase="Pending", Reason="", readiness=false. Elapsed: 9.551269ms
Feb  5 22:47:26.776: INFO: Pod "pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208832118s
Feb  5 22:47:28.784: INFO: Pod "pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216852619s
Feb  5 22:47:31.004: INFO: Pod "pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436632299s
Feb  5 22:47:33.364: INFO: Pod "pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676": Phase="Pending", Reason="", readiness=false. Elapsed: 8.796499872s
Feb  5 22:47:35.382: INFO: Pod "pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676": Phase="Pending", Reason="", readiness=false. Elapsed: 10.814076948s
Feb  5 22:47:37.389: INFO: Pod "pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676": Phase="Pending", Reason="", readiness=false. Elapsed: 12.82190801s
Feb  5 22:47:39.395: INFO: Pod "pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.828026129s
STEP: Saw pod success
Feb  5 22:47:39.396: INFO: Pod "pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676" satisfied condition "success or failure"
Feb  5 22:47:39.401: INFO: Trying to get logs from node jerma-node pod pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676 container secret-volume-test: 
STEP: delete the pod
Feb  5 22:47:39.518: INFO: Waiting for pod pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676 to disappear
Feb  5 22:47:39.526: INFO: Pod pod-secrets-f355ac17-8a7c-4c13-9c69-8cbde5bdc676 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:47:39.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4906" for this suite.

• [SLOW TEST:15.203 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4184,"failed":0}
SSSSSSSSSSS
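The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` lines above show the framework polling the pod phase until it reaches a terminal state or the deadline passes. A generic sketch of that wait loop (names hypothetical, not the framework's actual helper):

```python
import time

def wait_for_phase(get_phase, terminal=("Succeeded", "Failed"),
                   timeout=300, interval=2):
    """Poll get_phase() until it returns a terminal phase or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in terminal:
            return phase
        time.sleep(interval)
    raise TimeoutError(f"pod still {phase!r} after {timeout}s")

# Simulated phase sequence standing in for repeated API reads:
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), interval=0))  # Succeeded
```

The log's elapsed-time annotations (`Elapsed: 2.208832118s`, ...) correspond to one such poll iteration each.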
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:47:39.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  5 22:47:39.694: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06232baa-51f3-4a08-b014-5b7637557718" in namespace "projected-8076" to be "success or failure"
Feb  5 22:47:39.789: INFO: Pod "downwardapi-volume-06232baa-51f3-4a08-b014-5b7637557718": Phase="Pending", Reason="", readiness=false. Elapsed: 94.809122ms
Feb  5 22:47:41.800: INFO: Pod "downwardapi-volume-06232baa-51f3-4a08-b014-5b7637557718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10515243s
Feb  5 22:47:43.812: INFO: Pod "downwardapi-volume-06232baa-51f3-4a08-b014-5b7637557718": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117112323s
Feb  5 22:47:45.819: INFO: Pod "downwardapi-volume-06232baa-51f3-4a08-b014-5b7637557718": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124038305s
Feb  5 22:47:47.828: INFO: Pod "downwardapi-volume-06232baa-51f3-4a08-b014-5b7637557718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.133835168s
STEP: Saw pod success
Feb  5 22:47:47.829: INFO: Pod "downwardapi-volume-06232baa-51f3-4a08-b014-5b7637557718" satisfied condition "success or failure"
Feb  5 22:47:47.834: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-06232baa-51f3-4a08-b014-5b7637557718 container client-container: 
STEP: delete the pod
Feb  5 22:47:47.926: INFO: Waiting for pod downwardapi-volume-06232baa-51f3-4a08-b014-5b7637557718 to disappear
Feb  5 22:47:47.941: INFO: Pod downwardapi-volume-06232baa-51f3-4a08-b014-5b7637557718 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:47:47.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8076" for this suite.

• [SLOW TEST:8.419 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4195,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:47:47.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Feb  5 22:47:48.086: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:47:48.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7601" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":256,"skipped":4243,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:47:48.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:47:48.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:47:56.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2570" for this suite.

• [SLOW TEST:8.154 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4281,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:47:56.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  5 22:47:56.614: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-a 9531daed-99e1-4e4d-a9f5-ac22ca1af8fd 6627171 0 2020-02-05 22:47:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  5 22:47:56.615: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-a 9531daed-99e1-4e4d-a9f5-ac22ca1af8fd 6627171 0 2020-02-05 22:47:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  5 22:48:06.629: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-a 9531daed-99e1-4e4d-a9f5-ac22ca1af8fd 6627202 0 2020-02-05 22:47:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  5 22:48:06.630: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-a 9531daed-99e1-4e4d-a9f5-ac22ca1af8fd 6627202 0 2020-02-05 22:47:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  5 22:48:16.650: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-a 9531daed-99e1-4e4d-a9f5-ac22ca1af8fd 6627226 0 2020-02-05 22:47:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  5 22:48:16.652: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-a 9531daed-99e1-4e4d-a9f5-ac22ca1af8fd 6627226 0 2020-02-05 22:47:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  5 22:48:26.665: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-a 9531daed-99e1-4e4d-a9f5-ac22ca1af8fd 6627250 0 2020-02-05 22:47:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  5 22:48:26.665: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-a 9531daed-99e1-4e4d-a9f5-ac22ca1af8fd 6627250 0 2020-02-05 22:47:56 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  5 22:48:36.688: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-b e3d0a50a-a605-4cd5-b0dc-ba24e36e697b 6627276 0 2020-02-05 22:48:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  5 22:48:36.688: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-b e3d0a50a-a605-4cd5-b0dc-ba24e36e697b 6627276 0 2020-02-05 22:48:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  5 22:48:47.473: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-b e3d0a50a-a605-4cd5-b0dc-ba24e36e697b 6627303 0 2020-02-05 22:48:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  5 22:48:47.473: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7232 /api/v1/namespaces/watch-7232/configmaps/e2e-watch-test-configmap-b e3d0a50a-a605-4cd5-b0dc-ba24e36e697b 6627303 0 2020-02-05 22:48:36 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:48:57.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7232" for this suite.

• [SLOW TEST:61.119 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":258,"skipped":4298,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
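The watch spec above asserts that each watcher observes the expected ADDED/MODIFIED/DELETED notifications, in order, for the configmaps matching its label selector. A simplified sketch of that in-order check (helper name hypothetical; extra intervening events are tolerated, as a real watcher may see more than the expected minimum):

```python
def observed_expected(events, expected):
    """Check events contains expected as an in-order subsequence of (type, name) pairs."""
    it = iter(events)
    return all(any(e == want for e in it) for want in expected)

# Event stream as seen by the label-A watcher in the log above:
events = [("ADDED", "e2e-watch-test-configmap-a"),
          ("MODIFIED", "e2e-watch-test-configmap-a"),
          ("MODIFIED", "e2e-watch-test-configmap-a"),
          ("DELETED", "e2e-watch-test-configmap-a")]
print(observed_expected(events, [("ADDED", "e2e-watch-test-configmap-a"),
                                 ("DELETED", "e2e-watch-test-configmap-a")]))  # True
```

The negative direction matters too: a watcher on label B must not report events for configmap A, which this subsequence check would flag as a missing expected event.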
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:48:57.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  5 22:48:57.588: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a03ac7cc-25bf-47d4-8e22-d81827b67e12" in namespace "downward-api-3816" to be "success or failure"
Feb  5 22:48:57.611: INFO: Pod "downwardapi-volume-a03ac7cc-25bf-47d4-8e22-d81827b67e12": Phase="Pending", Reason="", readiness=false. Elapsed: 21.938879ms
Feb  5 22:48:59.617: INFO: Pod "downwardapi-volume-a03ac7cc-25bf-47d4-8e22-d81827b67e12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028396596s
Feb  5 22:49:01.624: INFO: Pod "downwardapi-volume-a03ac7cc-25bf-47d4-8e22-d81827b67e12": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035734357s
Feb  5 22:49:03.631: INFO: Pod "downwardapi-volume-a03ac7cc-25bf-47d4-8e22-d81827b67e12": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042290236s
Feb  5 22:49:05.640: INFO: Pod "downwardapi-volume-a03ac7cc-25bf-47d4-8e22-d81827b67e12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050911524s
STEP: Saw pod success
Feb  5 22:49:05.640: INFO: Pod "downwardapi-volume-a03ac7cc-25bf-47d4-8e22-d81827b67e12" satisfied condition "success or failure"
Feb  5 22:49:05.645: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a03ac7cc-25bf-47d4-8e22-d81827b67e12 container client-container: 
STEP: delete the pod
Feb  5 22:49:05.741: INFO: Waiting for pod downwardapi-volume-a03ac7cc-25bf-47d4-8e22-d81827b67e12 to disappear
Feb  5 22:49:05.744: INFO: Pod downwardapi-volume-a03ac7cc-25bf-47d4-8e22-d81827b67e12 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:49:05.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3816" for this suite.

• [SLOW TEST:8.270 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4347,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:49:05.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-6179
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  5 22:49:05.926: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  5 22:49:42.102: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6179 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  5 22:49:42.103: INFO: >>> kubeConfig: /root/.kube/config
I0205 22:49:42.149163       9 log.go:172] (0xc0021822c0) (0xc00117c960) Create stream
I0205 22:49:42.149264       9 log.go:172] (0xc0021822c0) (0xc00117c960) Stream added, broadcasting: 1
I0205 22:49:42.152290       9 log.go:172] (0xc0021822c0) Reply frame received for 1
I0205 22:49:42.152319       9 log.go:172] (0xc0021822c0) (0xc0026da8c0) Create stream
I0205 22:49:42.152327       9 log.go:172] (0xc0021822c0) (0xc0026da8c0) Stream added, broadcasting: 3
I0205 22:49:42.154044       9 log.go:172] (0xc0021822c0) Reply frame received for 3
I0205 22:49:42.154068       9 log.go:172] (0xc0021822c0) (0xc00172d9a0) Create stream
I0205 22:49:42.154078       9 log.go:172] (0xc0021822c0) (0xc00172d9a0) Stream added, broadcasting: 5
I0205 22:49:42.155925       9 log.go:172] (0xc0021822c0) Reply frame received for 5
I0205 22:49:42.257964       9 log.go:172] (0xc0021822c0) Data frame received for 3
I0205 22:49:42.258110       9 log.go:172] (0xc0026da8c0) (3) Data frame handling
I0205 22:49:42.258145       9 log.go:172] (0xc0026da8c0) (3) Data frame sent
I0205 22:49:42.355779       9 log.go:172] (0xc0021822c0) Data frame received for 1
I0205 22:49:42.356082       9 log.go:172] (0xc0021822c0) (0xc00172d9a0) Stream removed, broadcasting: 5
I0205 22:49:42.356167       9 log.go:172] (0xc00117c960) (1) Data frame handling
I0205 22:49:42.356282       9 log.go:172] (0xc00117c960) (1) Data frame sent
I0205 22:49:42.356298       9 log.go:172] (0xc0021822c0) (0xc0026da8c0) Stream removed, broadcasting: 3
I0205 22:49:42.356332       9 log.go:172] (0xc0021822c0) (0xc00117c960) Stream removed, broadcasting: 1
I0205 22:49:42.356347       9 log.go:172] (0xc0021822c0) Go away received
I0205 22:49:42.356897       9 log.go:172] (0xc0021822c0) (0xc00117c960) Stream removed, broadcasting: 1
I0205 22:49:42.356935       9 log.go:172] (0xc0021822c0) (0xc0026da8c0) Stream removed, broadcasting: 3
I0205 22:49:42.356942       9 log.go:172] (0xc0021822c0) (0xc00172d9a0) Stream removed, broadcasting: 5
Feb  5 22:49:42.357: INFO: Waiting for responses: map[]
Feb  5 22:49:42.363: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6179 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  5 22:49:42.363: INFO: >>> kubeConfig: /root/.kube/config
I0205 22:49:42.400211       9 log.go:172] (0xc001d3ea50) (0xc0026db0e0) Create stream
I0205 22:49:42.400342       9 log.go:172] (0xc001d3ea50) (0xc0026db0e0) Stream added, broadcasting: 1
I0205 22:49:42.403264       9 log.go:172] (0xc001d3ea50) Reply frame received for 1
I0205 22:49:42.403304       9 log.go:172] (0xc001d3ea50) (0xc001594140) Create stream
I0205 22:49:42.403315       9 log.go:172] (0xc001d3ea50) (0xc001594140) Stream added, broadcasting: 3
I0205 22:49:42.405747       9 log.go:172] (0xc001d3ea50) Reply frame received for 3
I0205 22:49:42.405765       9 log.go:172] (0xc001d3ea50) (0xc00172dc20) Create stream
I0205 22:49:42.405772       9 log.go:172] (0xc001d3ea50) (0xc00172dc20) Stream added, broadcasting: 5
I0205 22:49:42.407130       9 log.go:172] (0xc001d3ea50) Reply frame received for 5
I0205 22:49:42.490481       9 log.go:172] (0xc001d3ea50) Data frame received for 3
I0205 22:49:42.490735       9 log.go:172] (0xc001594140) (3) Data frame handling
I0205 22:49:42.490753       9 log.go:172] (0xc001594140) (3) Data frame sent
I0205 22:49:42.622903       9 log.go:172] (0xc001d3ea50) Data frame received for 1
I0205 22:49:42.623063       9 log.go:172] (0xc001d3ea50) (0xc00172dc20) Stream removed, broadcasting: 5
I0205 22:49:42.623116       9 log.go:172] (0xc0026db0e0) (1) Data frame handling
I0205 22:49:42.623124       9 log.go:172] (0xc0026db0e0) (1) Data frame sent
I0205 22:49:42.623138       9 log.go:172] (0xc001d3ea50) (0xc001594140) Stream removed, broadcasting: 3
I0205 22:49:42.623233       9 log.go:172] (0xc001d3ea50) (0xc0026db0e0) Stream removed, broadcasting: 1
I0205 22:49:42.623249       9 log.go:172] (0xc001d3ea50) Go away received
I0205 22:49:42.623879       9 log.go:172] (0xc001d3ea50) (0xc0026db0e0) Stream removed, broadcasting: 1
I0205 22:49:42.623901       9 log.go:172] (0xc001d3ea50) (0xc001594140) Stream removed, broadcasting: 3
I0205 22:49:42.623913       9 log.go:172] (0xc001d3ea50) (0xc00172dc20) Stream removed, broadcasting: 5
Feb  5 22:49:42.624: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:49:42.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6179" for this suite.

• [SLOW TEST:36.878 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4378,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:49:42.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Feb  5 22:49:42.833: INFO: namespace kubectl-206
Feb  5 22:49:42.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-206'
Feb  5 22:49:44.983: INFO: stderr: ""
Feb  5 22:49:44.983: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb  5 22:49:46.018: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:46.018: INFO: Found 0 / 1
Feb  5 22:49:46.991: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:46.991: INFO: Found 0 / 1
Feb  5 22:49:48.000: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:48.001: INFO: Found 0 / 1
Feb  5 22:49:49.172: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:49.172: INFO: Found 0 / 1
Feb  5 22:49:50.182: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:50.183: INFO: Found 0 / 1
Feb  5 22:49:50.990: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:50.990: INFO: Found 0 / 1
Feb  5 22:49:52.444: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:52.445: INFO: Found 0 / 1
Feb  5 22:49:53.439: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:53.439: INFO: Found 0 / 1
Feb  5 22:49:53.992: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:53.993: INFO: Found 0 / 1
Feb  5 22:49:54.990: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:54.990: INFO: Found 0 / 1
Feb  5 22:49:56.057: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:56.057: INFO: Found 0 / 1
Feb  5 22:49:56.995: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:56.996: INFO: Found 0 / 1
Feb  5 22:49:57.988: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:57.988: INFO: Found 0 / 1
Feb  5 22:49:58.992: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:58.993: INFO: Found 1 / 1
Feb  5 22:49:58.993: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  5 22:49:58.999: INFO: Selector matched 1 pods for map[app:agnhost]
Feb  5 22:49:58.999: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  5 22:49:58.999: INFO: wait on agnhost-master startup in kubectl-206 
Feb  5 22:49:58.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-228cw agnhost-master --namespace=kubectl-206'
Feb  5 22:49:59.237: INFO: stderr: ""
Feb  5 22:49:59.238: INFO: stdout: "Paused\n"
STEP: exposing RC
Feb  5 22:49:59.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-206'
Feb  5 22:49:59.431: INFO: stderr: ""
Feb  5 22:49:59.431: INFO: stdout: "service/rm2 exposed\n"
Feb  5 22:49:59.438: INFO: Service rm2 in namespace kubectl-206 found.
STEP: exposing service
Feb  5 22:50:01.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-206'
Feb  5 22:50:01.807: INFO: stderr: ""
Feb  5 22:50:01.807: INFO: stdout: "service/rm3 exposed\n"
Feb  5 22:50:01.821: INFO: Service rm3 in namespace kubectl-206 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:50:03.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-206" for this suite.

• [SLOW TEST:21.217 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":261,"skipped":4381,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:50:03.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Feb  5 22:50:04.078: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  5 22:50:04.091: INFO: Waiting for terminating namespaces to be deleted...
Feb  5 22:50:04.094: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb  5 22:50:04.102: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb  5 22:50:04.102: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  5 22:50:04.102: INFO: agnhost-master-228cw from kubectl-206 started at 2020-02-05 22:49:45 +0000 UTC (1 container statuses recorded)
Feb  5 22:50:04.102: INFO: 	Container agnhost-master ready: true, restart count 0
Feb  5 22:50:04.102: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb  5 22:50:04.102: INFO: 	Container weave ready: true, restart count 1
Feb  5 22:50:04.102: INFO: 	Container weave-npc ready: true, restart count 0
Feb  5 22:50:04.102: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb  5 22:50:04.128: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb  5 22:50:04.128: INFO: 	Container kube-controller-manager ready: true, restart count 3
Feb  5 22:50:04.128: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb  5 22:50:04.128: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  5 22:50:04.128: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb  5 22:50:04.128: INFO: 	Container weave ready: true, restart count 0
Feb  5 22:50:04.128: INFO: 	Container weave-npc ready: true, restart count 0
Feb  5 22:50:04.128: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb  5 22:50:04.128: INFO: 	Container kube-scheduler ready: true, restart count 5
Feb  5 22:50:04.128: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb  5 22:50:04.128: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb  5 22:50:04.128: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb  5 22:50:04.128: INFO: 	Container etcd ready: true, restart count 1
Feb  5 22:50:04.128: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb  5 22:50:04.128: INFO: 	Container coredns ready: true, restart count 0
Feb  5 22:50:04.128: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb  5 22:50:04.128: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8ff5f762-4893-4c2f-b436-d652cdd5396a 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-8ff5f762-4893-4c2f-b436-d652cdd5396a off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8ff5f762-4893-4c2f-b436-d652cdd5396a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:50:36.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6361" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:32.617 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":262,"skipped":4383,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:50:36.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  5 22:50:48.925: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:50:49.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6339" for this suite.

• [SLOW TEST:12.545 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4383,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:50:49.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Feb  5 22:50:49.142: INFO: Waiting up to 5m0s for pod "downward-api-87d436fd-fb8f-4857-8a24-1e06560c88e1" in namespace "downward-api-3783" to be "success or failure"
Feb  5 22:50:49.152: INFO: Pod "downward-api-87d436fd-fb8f-4857-8a24-1e06560c88e1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.830024ms
Feb  5 22:50:51.162: INFO: Pod "downward-api-87d436fd-fb8f-4857-8a24-1e06560c88e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019163294s
Feb  5 22:50:53.167: INFO: Pod "downward-api-87d436fd-fb8f-4857-8a24-1e06560c88e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024447552s
Feb  5 22:50:55.172: INFO: Pod "downward-api-87d436fd-fb8f-4857-8a24-1e06560c88e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029867349s
Feb  5 22:50:57.202: INFO: Pod "downward-api-87d436fd-fb8f-4857-8a24-1e06560c88e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059834126s
STEP: Saw pod success
Feb  5 22:50:57.202: INFO: Pod "downward-api-87d436fd-fb8f-4857-8a24-1e06560c88e1" satisfied condition "success or failure"
Feb  5 22:50:57.207: INFO: Trying to get logs from node jerma-node pod downward-api-87d436fd-fb8f-4857-8a24-1e06560c88e1 container dapi-container: 
STEP: delete the pod
Feb  5 22:50:57.265: INFO: Waiting for pod downward-api-87d436fd-fb8f-4857-8a24-1e06560c88e1 to disappear
Feb  5 22:50:57.279: INFO: Pod downward-api-87d436fd-fb8f-4857-8a24-1e06560c88e1 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:50:57.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3783" for this suite.

• [SLOW TEST:8.269 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4387,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:50:57.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-d7dbc2ec-b192-4266-b18b-358f41fa538f
STEP: Creating a pod to test consume secrets
Feb  5 22:50:57.407: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-73ed51b1-8cff-44ff-a680-cdbf578478ed" in namespace "projected-9755" to be "success or failure"
Feb  5 22:50:57.422: INFO: Pod "pod-projected-secrets-73ed51b1-8cff-44ff-a680-cdbf578478ed": Phase="Pending", Reason="", readiness=false. Elapsed: 15.190491ms
Feb  5 22:50:59.431: INFO: Pod "pod-projected-secrets-73ed51b1-8cff-44ff-a680-cdbf578478ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024187609s
Feb  5 22:51:01.438: INFO: Pod "pod-projected-secrets-73ed51b1-8cff-44ff-a680-cdbf578478ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03118806s
Feb  5 22:51:03.447: INFO: Pod "pod-projected-secrets-73ed51b1-8cff-44ff-a680-cdbf578478ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039997792s
Feb  5 22:51:05.455: INFO: Pod "pod-projected-secrets-73ed51b1-8cff-44ff-a680-cdbf578478ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048357473s
STEP: Saw pod success
Feb  5 22:51:05.455: INFO: Pod "pod-projected-secrets-73ed51b1-8cff-44ff-a680-cdbf578478ed" satisfied condition "success or failure"
Feb  5 22:51:05.462: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-73ed51b1-8cff-44ff-a680-cdbf578478ed container projected-secret-volume-test: 
STEP: delete the pod
Feb  5 22:51:05.545: INFO: Waiting for pod pod-projected-secrets-73ed51b1-8cff-44ff-a680-cdbf578478ed to disappear
Feb  5 22:51:05.549: INFO: Pod pod-projected-secrets-73ed51b1-8cff-44ff-a680-cdbf578478ed no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:51:05.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9755" for this suite.

• [SLOW TEST:8.328 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4401,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:51:05.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb  5 22:51:06.812: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb  5 22:51:08.830: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:51:10.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:51:12.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539866, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  5 22:51:15.912: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:51:16.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6492" for this suite.
STEP: Destroying namespace "webhook-6492-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.710 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":266,"skipped":4408,"failed":0}
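For orientation, the configuration objects this test creates and then deletes look roughly like the sketch below. The namespace `webhook-6492` and service `e2e-test-webhook` appear in this run; the object name, webhook name, and path are illustrative. The behavior being verified is stated in the test name: a registered webhook must not be able to mutate or prevent deletion of webhook configuration objects themselves.

```python
# Sketch of the kind of ValidatingWebhookConfiguration this test creates and
# deletes. "deny-all-example", "deny.example.com", and "/always-deny" are
# illustrative names, not taken from this run.
validating_webhook_configuration = {
    "apiVersion": "admissionregistration.k8s.io/v1",
    "kind": "ValidatingWebhookConfiguration",
    "metadata": {"name": "deny-all-example"},
    "webhooks": [
        {
            "name": "deny.example.com",
            # Tries to intercept writes to webhook configuration objects.
            "rules": [
                {
                    "apiGroups": ["admissionregistration.k8s.io"],
                    "apiVersions": ["v1"],
                    "operations": ["CREATE", "UPDATE", "DELETE"],
                    "resources": [
                        "validatingwebhookconfigurations",
                        "mutatingwebhookconfigurations",
                    ],
                }
            ],
            "clientConfig": {
                "service": {
                    "namespace": "webhook-6492",
                    "name": "e2e-test-webhook",
                    "path": "/always-deny",
                }
            },
            "failurePolicy": "Fail",
            "sideEffects": "None",
            "admissionReviewVersions": ["v1"],
        }
    ],
}
```

Even with `failurePolicy: Fail`, the STEP lines above show that the dummy configuration objects could still be deleted.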
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:51:16.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-03b91edc-b326-4051-92b5-59f34f903693
STEP: Creating a pod to test consume configMaps
Feb  5 22:51:16.427: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2" in namespace "projected-4930" to be "success or failure"
Feb  5 22:51:16.432: INFO: Pod "pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41867ms
Feb  5 22:51:18.445: INFO: Pod "pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017373492s
Feb  5 22:51:20.455: INFO: Pod "pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027565141s
Feb  5 22:51:22.464: INFO: Pod "pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03667047s
Feb  5 22:51:24.475: INFO: Pod "pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048256697s
Feb  5 22:51:26.486: INFO: Pod "pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058670033s
STEP: Saw pod success
Feb  5 22:51:26.486: INFO: Pod "pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2" satisfied condition "success or failure"
Feb  5 22:51:26.490: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  5 22:51:26.570: INFO: Waiting for pod pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2 to disappear
Feb  5 22:51:26.576: INFO: Pod pod-projected-configmaps-0ad20061-7055-4e78-9c84-e35f63e989b2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:51:26.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4930" for this suite.

• [SLOW TEST:10.260 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4419,"failed":0}
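The projected configMap test above mounts an individual key at a mapped path with an explicit per-item file mode. A minimal sketch of the volume involved, assuming a single key mapping and mode 0o400 (the key, path, mode, and mount path are illustrative; the configMap name prefix and container name come from this run):

```python
# Sketch of a projected volume that maps one configMap key to a new path
# with an explicit item mode. Key "data-1", path "path/to/data-2", and
# mode 0o400 are illustrative values, not taken from this log.
projected_volume = {
    "name": "projected-configmap-volume",
    "projected": {
        "sources": [
            {
                "configMap": {
                    # the run used a generated name with this prefix
                    "name": "projected-configmap-test-volume-map",
                    "items": [
                        {"key": "data-1", "path": "path/to/data-2", "mode": 0o400}
                    ],
                }
            }
        ]
    },
}

# Mounted read-only into the test container ("projected-configmap-volume-test").
volume_mount = {
    "name": "projected-configmap-volume",
    "mountPath": "/etc/projected-configmap-volume",
    "readOnly": True,
}
```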
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:51:26.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  5 22:51:26.729: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d7b7e20-f0cc-4a4a-83ee-703db4f14a42" in namespace "projected-9724" to be "success or failure"
Feb  5 22:51:26.777: INFO: Pod "downwardapi-volume-7d7b7e20-f0cc-4a4a-83ee-703db4f14a42": Phase="Pending", Reason="", readiness=false. Elapsed: 47.385562ms
Feb  5 22:51:28.784: INFO: Pod "downwardapi-volume-7d7b7e20-f0cc-4a4a-83ee-703db4f14a42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054204527s
Feb  5 22:51:30.789: INFO: Pod "downwardapi-volume-7d7b7e20-f0cc-4a4a-83ee-703db4f14a42": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058970892s
Feb  5 22:51:32.793: INFO: Pod "downwardapi-volume-7d7b7e20-f0cc-4a4a-83ee-703db4f14a42": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06381637s
Feb  5 22:51:34.850: INFO: Pod "downwardapi-volume-7d7b7e20-f0cc-4a4a-83ee-703db4f14a42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120406442s
STEP: Saw pod success
Feb  5 22:51:34.850: INFO: Pod "downwardapi-volume-7d7b7e20-f0cc-4a4a-83ee-703db4f14a42" satisfied condition "success or failure"
Feb  5 22:51:34.855: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7d7b7e20-f0cc-4a4a-83ee-703db4f14a42 container client-container: 
STEP: delete the pod
Feb  5 22:51:34.913: INFO: Waiting for pod downwardapi-volume-7d7b7e20-f0cc-4a4a-83ee-703db4f14a42 to disappear
Feb  5 22:51:34.918: INFO: Pod downwardapi-volume-7d7b7e20-f0cc-4a4a-83ee-703db4f14a42 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:51:34.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9724" for this suite.

• [SLOW TEST:8.347 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4424,"failed":0}
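The downward API test above exposes the container's memory request as a file in a projected volume via a `resourceFieldRef`. A sketch of the volume item involved (`client-container` is the container name from this run; the file path is illustrative, and with the default divisor of "1" the value is written in bytes):

```python
# Sketch of a downward API volume item that surfaces requests.memory as a
# file. The path "memory_request" is illustrative.
downward_api_volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [
            {
                "path": "memory_request",
                "resourceFieldRef": {
                    "containerName": "client-container",
                    "resource": "requests.memory",
                    # divisor defaults to "1", i.e. the file contains bytes
                },
            }
        ]
    },
}
```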
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:51:34.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb  5 22:51:36.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-crd-conversion-webhook-deployment-78dcf5dd84\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:51:38.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:51:40.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:51:42.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  5 22:51:44.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716539896, loc:(*time.Location)(0x7d100a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb  5 22:51:47.722: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:51:47.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:51:49.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3564" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:14.447 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":269,"skipped":4442,"failed":0}
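The conversion webhook behind this test converts stored objects on the fly, so a non-homogeneous list (one CR stored as v1, one as v2) can be listed uniformly in either version. A toy model of that round trip, assuming a schema change of the kind the sample webhook images use (a v1 `hostPort` string split into v2 `host`/`port` fields; the group `stable.example.com` and all field names are assumptions, not taken from this log):

```python
def convert(obj, target):
    """Illustrative converter between two hypothetical CRD versions.

    Assumed schema change (not confirmed by this log): v1 stores
    "hostPort" as "host:port", v2 stores separate "host" and "port".
    """
    group = "stable.example.com"  # hypothetical group
    if obj["apiVersion"] == f"{group}/{target}":
        return dict(obj)  # already in the requested version
    out = {k: v for k, v in obj.items() if k not in ("hostPort", "host", "port")}
    out["apiVersion"] = f"{group}/{target}"
    if target == "v2":
        host, _, port = obj["hostPort"].partition(":")
        out["host"], out["port"] = host, port
    else:
        out["hostPort"] = f'{obj["host"]}:{obj["port"]}'
    return out


def list_as(items, target):
    # "List CRs in v1" / "List CRs in v2": every item comes back converted,
    # regardless of which version it was written in.
    return [convert(o, target) for o in items]
```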
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:51:49.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Feb  5 22:51:49.596: INFO: Waiting up to 5m0s for pod "client-containers-30b6f891-73ec-404d-96af-646095c7a7e5" in namespace "containers-2387" to be "success or failure"
Feb  5 22:51:49.614: INFO: Pod "client-containers-30b6f891-73ec-404d-96af-646095c7a7e5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.920627ms
Feb  5 22:51:51.622: INFO: Pod "client-containers-30b6f891-73ec-404d-96af-646095c7a7e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025792955s
Feb  5 22:51:53.632: INFO: Pod "client-containers-30b6f891-73ec-404d-96af-646095c7a7e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036268592s
Feb  5 22:51:55.641: INFO: Pod "client-containers-30b6f891-73ec-404d-96af-646095c7a7e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04474573s
Feb  5 22:51:57.648: INFO: Pod "client-containers-30b6f891-73ec-404d-96af-646095c7a7e5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052026362s
Feb  5 22:51:59.655: INFO: Pod "client-containers-30b6f891-73ec-404d-96af-646095c7a7e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059362191s
STEP: Saw pod success
Feb  5 22:51:59.655: INFO: Pod "client-containers-30b6f891-73ec-404d-96af-646095c7a7e5" satisfied condition "success or failure"
Feb  5 22:51:59.662: INFO: Trying to get logs from node jerma-node pod client-containers-30b6f891-73ec-404d-96af-646095c7a7e5 container test-container: 
STEP: delete the pod
Feb  5 22:51:59.802: INFO: Waiting for pod client-containers-30b6f891-73ec-404d-96af-646095c7a7e5 to disappear
Feb  5 22:51:59.882: INFO: Pod client-containers-30b6f891-73ec-404d-96af-646095c7a7e5 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:51:59.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2387" for this suite.

• [SLOW TEST:10.515 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4444,"failed":0}
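The "override all" pod above sets both `command` and `args`, which replaces the image's ENTRYPOINT and CMD entirely. The full interaction can be modeled in a few lines (this mirrors the documented Kubernetes rules; the function name is ours):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """What the container actually runs, per the Kubernetes command/args rules."""
    if command is None and args is None:
        # neither overridden: image ENTRYPOINT + image CMD
        return image_entrypoint + image_cmd
    if command is not None and args is None:
        # command alone replaces ENTRYPOINT and discards the image CMD
        return command
    if command is None:
        # args alone keeps the image ENTRYPOINT but replaces CMD
        return image_entrypoint + args
    # both set ("override all"): the image defaults are ignored entirely
    return command + args
```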
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:51:59.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  5 22:52:00.028: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  5 22:52:05.090: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:52:05.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1365" for this suite.

• [SLOW TEST:5.348 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":271,"skipped":4457,"failed":0}
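The release behavior exercised above hinges on exact-match label selectors: when a pod's labels are edited so they no longer match, the controller releases the pod (it stops counting toward replicas and is orphaned rather than deleted). A minimal model of that reconciliation step (pod names below are illustrative):

```python
def matches(selector, labels):
    # ReplicationController selectors are exact key/value matches
    return all(labels.get(k) == v for k, v in selector.items())


def reconcile_ownership(selector, pods):
    """Split pods into those still owned by the controller and those
    released because their labels no longer match the selector."""
    owned = [p for p in pods if matches(selector, p["labels"])]
    released = [p for p in pods if not matches(selector, p["labels"])]
    return owned, released
```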
SSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:52:05.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:52:05.425: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-007f3b48-87c7-430a-9958-a5211fa9d560" in namespace "security-context-test-7362" to be "success or failure"
Feb  5 22:52:05.446: INFO: Pod "alpine-nnp-false-007f3b48-87c7-430a-9958-a5211fa9d560": Phase="Pending", Reason="", readiness=false. Elapsed: 20.591495ms
Feb  5 22:52:07.453: INFO: Pod "alpine-nnp-false-007f3b48-87c7-430a-9958-a5211fa9d560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027528796s
Feb  5 22:52:09.460: INFO: Pod "alpine-nnp-false-007f3b48-87c7-430a-9958-a5211fa9d560": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033712686s
Feb  5 22:52:11.468: INFO: Pod "alpine-nnp-false-007f3b48-87c7-430a-9958-a5211fa9d560": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041685705s
Feb  5 22:52:13.475: INFO: Pod "alpine-nnp-false-007f3b48-87c7-430a-9958-a5211fa9d560": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049256799s
Feb  5 22:52:15.482: INFO: Pod "alpine-nnp-false-007f3b48-87c7-430a-9958-a5211fa9d560": Phase="Pending", Reason="", readiness=false. Elapsed: 10.055720498s
Feb  5 22:52:17.490: INFO: Pod "alpine-nnp-false-007f3b48-87c7-430a-9958-a5211fa9d560": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.064361941s
Feb  5 22:52:17.490: INFO: Pod "alpine-nnp-false-007f3b48-87c7-430a-9958-a5211fa9d560" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:52:17.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7362" for this suite.

• [SLOW TEST:12.273 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4461,"failed":0}
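The pod name `alpine-nnp-false-…` reflects what this test checks: with `allowPrivilegeEscalation: false`, the container process runs with the kernel's no_new_privs flag set, so setuid binaries and similar mechanisms cannot raise its privileges. The relevant container fragment (the image and container name are illustrative beyond the prefix seen in this run):

```python
# Sketch of the securityContext exercised by the test; "alpine" is an
# illustrative image choice.
container = {
    "name": "alpine-nnp-false",  # prefix matches the pod name in this run
    "image": "alpine",
    "securityContext": {
        # directly controls the no_new_privs flag on the container process
        "allowPrivilegeEscalation": False,
    },
}
```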
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:52:17.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:52:17.639: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 22.144932ms)
Feb  5 22:52:17.646: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.375093ms)
Feb  5 22:52:17.655: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.790496ms)
Feb  5 22:52:17.663: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.405048ms)
Feb  5 22:52:17.669: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.30236ms)
Feb  5 22:52:17.674: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.497671ms)
Feb  5 22:52:17.679: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.62132ms)
Feb  5 22:52:17.714: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 34.929992ms)
Feb  5 22:52:17.720: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.318938ms)
Feb  5 22:52:17.724: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.482235ms)
Feb  5 22:52:17.728: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.973295ms)
Feb  5 22:52:17.731: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.669243ms)
Feb  5 22:52:17.735: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.413185ms)
Feb  5 22:52:17.739: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.662248ms)
Feb  5 22:52:17.742: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.851019ms)
Feb  5 22:52:17.747: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.219618ms)
Feb  5 22:52:17.751: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.922025ms)
Feb  5 22:52:17.754: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.182618ms)
Feb  5 22:52:17.758: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.71907ms)
Feb  5 22:52:17.761: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.14065ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:52:17.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5233" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":273,"skipped":4465,"failed":0}
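Each of the twenty requests above hits the same apiserver proxy subresource, which forwards to the kubelet's `/logs/` directory listing on the named node and explicit port. The path construction is mechanical:

```python
def node_logs_proxy_path(node_name, kubelet_port=10250):
    # apiserver node proxy subresource that forwards to the kubelet's
    # /logs/ listing on the given port
    return f"/api/v1/nodes/{node_name}:{kubelet_port}/proxy/logs/"
```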
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:52:17.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Feb  5 22:52:17.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d54a6fca-e120-43c3-a54b-76dc01c02b3d" in namespace "projected-1132" to be "success or failure"
Feb  5 22:52:17.984: INFO: Pod "downwardapi-volume-d54a6fca-e120-43c3-a54b-76dc01c02b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 61.891411ms
Feb  5 22:52:20.016: INFO: Pod "downwardapi-volume-d54a6fca-e120-43c3-a54b-76dc01c02b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094280109s
Feb  5 22:52:22.030: INFO: Pod "downwardapi-volume-d54a6fca-e120-43c3-a54b-76dc01c02b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10842303s
Feb  5 22:52:24.042: INFO: Pod "downwardapi-volume-d54a6fca-e120-43c3-a54b-76dc01c02b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120072991s
Feb  5 22:52:26.047: INFO: Pod "downwardapi-volume-d54a6fca-e120-43c3-a54b-76dc01c02b3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125025892s
STEP: Saw pod success
Feb  5 22:52:26.047: INFO: Pod "downwardapi-volume-d54a6fca-e120-43c3-a54b-76dc01c02b3d" satisfied condition "success or failure"
Feb  5 22:52:26.050: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d54a6fca-e120-43c3-a54b-76dc01c02b3d container client-container: 
STEP: delete the pod
Feb  5 22:52:26.137: INFO: Waiting for pod downwardapi-volume-d54a6fca-e120-43c3-a54b-76dc01c02b3d to disappear
Feb  5 22:52:26.167: INFO: Pod downwardapi-volume-d54a6fca-e120-43c3-a54b-76dc01c02b3d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:52:26.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1132" for this suite.

• [SLOW TEST:8.447 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4466,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:52:26.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  5 22:52:26.479: INFO: Waiting up to 5m0s for pod "pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc" in namespace "emptydir-8120" to be "success or failure"
Feb  5 22:52:26.495: INFO: Pod "pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc": Phase="Pending", Reason="", readiness=false. Elapsed: 15.109048ms
Feb  5 22:52:28.517: INFO: Pod "pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037168935s
Feb  5 22:52:30.523: INFO: Pod "pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043580529s
Feb  5 22:52:32.547: INFO: Pod "pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067332633s
Feb  5 22:52:34.557: INFO: Pod "pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076815907s
Feb  5 22:52:36.566: INFO: Pod "pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086117081s
STEP: Saw pod success
Feb  5 22:52:36.566: INFO: Pod "pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc" satisfied condition "success or failure"
Feb  5 22:52:36.572: INFO: Trying to get logs from node jerma-node pod pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc container test-container: 
STEP: delete the pod
Feb  5 22:52:36.652: INFO: Waiting for pod pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc to disappear
Feb  5 22:52:36.706: INFO: Pod pod-1bcffdac-c9dc-42ad-b8cd-cb38f14e66bc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:52:36.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8120" for this suite.

• [SLOW TEST:10.503 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4482,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:52:36.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Feb  5 22:52:36.851: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Feb  5 22:52:36.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-229'
Feb  5 22:52:37.394: INFO: stderr: ""
Feb  5 22:52:37.394: INFO: stdout: "service/agnhost-slave created\n"
Feb  5 22:52:37.395: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Feb  5 22:52:37.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-229'
Feb  5 22:52:37.764: INFO: stderr: ""
Feb  5 22:52:37.764: INFO: stdout: "service/agnhost-master created\n"
Feb  5 22:52:37.765: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  5 22:52:37.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-229'
Feb  5 22:52:38.192: INFO: stderr: ""
Feb  5 22:52:38.192: INFO: stdout: "service/frontend created\n"
Feb  5 22:52:38.192: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb  5 22:52:38.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-229'
Feb  5 22:52:38.570: INFO: stderr: ""
Feb  5 22:52:38.570: INFO: stdout: "deployment.apps/frontend created\n"
Feb  5 22:52:38.571: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  5 22:52:38.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-229'
Feb  5 22:52:38.976: INFO: stderr: ""
Feb  5 22:52:38.976: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb  5 22:52:38.977: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  5 22:52:38.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-229'
Feb  5 22:52:41.091: INFO: stderr: ""
Feb  5 22:52:41.091: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Feb  5 22:52:41.091: INFO: Waiting for all frontend pods to be Running.
Feb  5 22:53:01.143: INFO: Waiting for frontend to serve content.
Feb  5 22:53:01.189: INFO: Trying to add a new entry to the guestbook.
Feb  5 22:53:01.208: INFO: Verifying that added entry can be retrieved.
Feb  5 22:53:01.226: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Feb  5 22:53:06.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-229'
Feb  5 22:53:06.456: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  5 22:53:06.456: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  5 22:53:06.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-229'
Feb  5 22:53:06.767: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  5 22:53:06.767: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  5 22:53:06.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-229'
Feb  5 22:53:06.968: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  5 22:53:06.968: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  5 22:53:06.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-229'
Feb  5 22:53:07.117: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  5 22:53:07.118: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  5 22:53:07.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-229'
Feb  5 22:53:07.300: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  5 22:53:07.300: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  5 22:53:07.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-229'
Feb  5 22:53:07.644: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  5 22:53:07.644: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:53:07.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-229" for this suite.

• [SLOW TEST:31.041 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:385
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":276,"skipped":4491,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:53:07.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-42db1b07-f374-47aa-ad31-eeff462d004a
STEP: Creating a pod to test consume secrets
Feb  5 22:53:10.023: INFO: Waiting up to 5m0s for pod "pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8" in namespace "secrets-4266" to be "success or failure"
Feb  5 22:53:10.073: INFO: Pod "pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8": Phase="Pending", Reason="", readiness=false. Elapsed: 50.121158ms
Feb  5 22:53:12.284: INFO: Pod "pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260329808s
Feb  5 22:53:14.331: INFO: Pod "pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307236393s
Feb  5 22:53:16.339: INFO: Pod "pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.315629987s
Feb  5 22:53:18.346: INFO: Pod "pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32285262s
Feb  5 22:53:20.352: INFO: Pod "pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.328691979s
Feb  5 22:53:22.362: INFO: Pod "pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.338226052s
Feb  5 22:53:24.367: INFO: Pod "pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.344081721s
STEP: Saw pod success
Feb  5 22:53:24.368: INFO: Pod "pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8" satisfied condition "success or failure"
Feb  5 22:53:24.371: INFO: Trying to get logs from node jerma-node pod pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8 container secret-volume-test: 
STEP: delete the pod
Feb  5 22:53:24.498: INFO: Waiting for pod pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8 to disappear
Feb  5 22:53:24.507: INFO: Pod pod-secrets-5c946841-7456-4293-b17b-7f4026aa04f8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:53:24.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4266" for this suite.

• [SLOW TEST:16.758 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4532,"failed":0}
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Feb  5 22:53:24.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Feb  5 22:53:24.666: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb  5 22:53:27.897: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Feb  5 22:53:28.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9069" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":278,"skipped":4532,"failed":0}
SSSS
Feb  5 22:53:28.406: INFO: Running AfterSuite actions on all nodes
Feb  5 22:53:28.406: INFO: Running AfterSuite actions on node 1
Feb  5 22:53:28.406: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4536,"failed":0}

Ran 278 of 4814 Specs in 6284.509 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4536 Skipped
PASS