I0323 23:36:12.986685 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0323 23:36:12.986875 7 e2e.go:124] Starting e2e run "a59f3c36-28e9-4a60-9975-a3d03ff1cc12" on Ginkgo node 1 {"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1585006571 - Will randomize all specs Will run 275 of 4992 specs Mar 23 23:36:13.039: INFO: >>> kubeConfig: /root/.kube/config Mar 23 23:36:13.046: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable Mar 23 23:36:13.071: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready Mar 23 23:36:13.107: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) Mar 23 23:36:13.107: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Mar 23 23:36:13.107: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Mar 23 23:36:13.117: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Mar 23 23:36:13.117: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Mar 23 23:36:13.117: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad Mar 23 23:36:13.118: INFO: kube-apiserver version: v1.17.0 Mar 23 23:36:13.118: INFO: >>> kubeConfig: /root/.kube/config Mar 23 23:36:13.124: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:36:13.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota Mar 23 23:36:13.184: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:36:29.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8172" for this suite. 
• [SLOW TEST:16.199 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":1,"skipped":22,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:36:29.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 23 23:36:30.083: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 23 23:36:32.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603390, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603390, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603390, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603390, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 23:36:35.225: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding 
mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:36:35.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9022" for this suite. STEP: Destroying namespace "webhook-9022-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.019 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":2,"skipped":51,"failed":0} SSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:36:35.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-1585 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1585 to expose endpoints map[] Mar 23 23:36:35.476: INFO: Get endpoints failed (57.781464ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 23 23:36:36.480: INFO: successfully validated that service multi-endpoint-test in namespace services-1585 exposes endpoints map[] (1.061816886s elapsed) STEP: Creating pod pod1 in namespace services-1585 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1585 to expose endpoints map[pod1:[100]] Mar 23 23:36:39.528: INFO: successfully validated that service multi-endpoint-test in namespace services-1585 exposes endpoints map[pod1:[100]] (3.041145643s elapsed) STEP: Creating pod pod2 in namespace services-1585 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1585 to expose endpoints map[pod1:[100] pod2:[101]] Mar 23 23:36:42.636: INFO: successfully validated that service multi-endpoint-test in namespace services-1585 exposes endpoints map[pod1:[100] pod2:[101]] (3.104524333s elapsed) STEP: Deleting pod pod1 in namespace services-1585 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1585 to expose endpoints map[pod2:[101]] Mar 23 23:36:43.687: INFO: successfully validated that service 
multi-endpoint-test in namespace services-1585 exposes endpoints map[pod2:[101]] (1.047213926s elapsed) STEP: Deleting pod pod2 in namespace services-1585 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1585 to expose endpoints map[] Mar 23 23:36:44.730: INFO: successfully validated that service multi-endpoint-test in namespace services-1585 exposes endpoints map[] (1.037444261s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:36:44.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1585" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:9.451 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":3,"skipped":54,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:36:44.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 23:36:44.874: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01b50605-e745-4dd5-9bfd-1c6b2307ab89" in namespace "downward-api-6682" to be "Succeeded or Failed" Mar 23 23:36:44.879: INFO: Pod "downwardapi-volume-01b50605-e745-4dd5-9bfd-1c6b2307ab89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191777ms Mar 23 23:36:46.902: INFO: Pod "downwardapi-volume-01b50605-e745-4dd5-9bfd-1c6b2307ab89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027619784s Mar 23 23:36:48.908: INFO: Pod "downwardapi-volume-01b50605-e745-4dd5-9bfd-1c6b2307ab89": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033324845s STEP: Saw pod success Mar 23 23:36:48.908: INFO: Pod "downwardapi-volume-01b50605-e745-4dd5-9bfd-1c6b2307ab89" satisfied condition "Succeeded or Failed" Mar 23 23:36:48.911: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-01b50605-e745-4dd5-9bfd-1c6b2307ab89 container client-container: STEP: delete the pod Mar 23 23:36:48.940: INFO: Waiting for pod downwardapi-volume-01b50605-e745-4dd5-9bfd-1c6b2307ab89 to disappear Mar 23 23:36:48.945: INFO: Pod downwardapi-volume-01b50605-e745-4dd5-9bfd-1c6b2307ab89 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:36:48.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6682" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":56,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:36:48.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 23 23:36:53.642: INFO: Successfully updated pod "annotationupdate71c018e8-4fb8-434d-8826-d7615b63a965" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:36:55.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4286" for this suite. 
• [SLOW TEST:6.753 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":71,"failed":0} SSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:36:55.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:37:55.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5979" for this suite. 
• [SLOW TEST:60.080 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":78,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:37:55.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-98b33999-cfea-47e8-97ce-8301b08b36e7 STEP: Creating a pod to test consume secrets Mar 23 23:37:55.878: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-061a3e54-1368-4528-bf63-fdd19e7546fa" in namespace "projected-3015" to be "Succeeded or Failed" Mar 23 23:37:55.882: INFO: Pod "pod-projected-secrets-061a3e54-1368-4528-bf63-fdd19e7546fa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.785302ms Mar 23 23:37:57.914: INFO: Pod "pod-projected-secrets-061a3e54-1368-4528-bf63-fdd19e7546fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03625826s Mar 23 23:37:59.919: INFO: Pod "pod-projected-secrets-061a3e54-1368-4528-bf63-fdd19e7546fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040636501s STEP: Saw pod success Mar 23 23:37:59.919: INFO: Pod "pod-projected-secrets-061a3e54-1368-4528-bf63-fdd19e7546fa" satisfied condition "Succeeded or Failed" Mar 23 23:37:59.922: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-061a3e54-1368-4528-bf63-fdd19e7546fa container projected-secret-volume-test: STEP: delete the pod Mar 23 23:37:59.993: INFO: Waiting for pod pod-projected-secrets-061a3e54-1368-4528-bf63-fdd19e7546fa to disappear Mar 23 23:37:59.995: INFO: Pod pod-projected-secrets-061a3e54-1368-4528-bf63-fdd19e7546fa no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:37:59.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3015" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":97,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:38:00.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 23 23:38:00.062: INFO: Waiting up to 5m0s for pod "pod-9a551b8e-c435-4734-b5ec-a034727360d1" in namespace "emptydir-6732" to be "Succeeded or Failed" Mar 23 23:38:00.066: INFO: Pod "pod-9a551b8e-c435-4734-b5ec-a034727360d1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.910048ms Mar 23 23:38:02.069: INFO: Pod "pod-9a551b8e-c435-4734-b5ec-a034727360d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007012299s Mar 23 23:38:04.084: INFO: Pod "pod-9a551b8e-c435-4734-b5ec-a034727360d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022284381s STEP: Saw pod success Mar 23 23:38:04.084: INFO: Pod "pod-9a551b8e-c435-4734-b5ec-a034727360d1" satisfied condition "Succeeded or Failed" Mar 23 23:38:04.087: INFO: Trying to get logs from node latest-worker pod pod-9a551b8e-c435-4734-b5ec-a034727360d1 container test-container: STEP: delete the pod Mar 23 23:38:04.104: INFO: Waiting for pod pod-9a551b8e-c435-4734-b5ec-a034727360d1 to disappear Mar 23 23:38:04.138: INFO: Pod pod-9a551b8e-c435-4734-b5ec-a034727360d1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:38:04.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6732" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":118,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:38:04.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:38:08.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3917" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":120,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:38:08.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:38:12.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1019" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":122,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:38:12.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 23 23:38:12.411: INFO: Waiting up to 5m0s for pod "pod-71510fd0-1d9f-4db3-ad93-ea618c9dc92f" in namespace "emptydir-9154" to be "Succeeded or Failed" Mar 23 23:38:12.422: INFO: Pod "pod-71510fd0-1d9f-4db3-ad93-ea618c9dc92f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.382142ms Mar 23 23:38:14.426: INFO: Pod "pod-71510fd0-1d9f-4db3-ad93-ea618c9dc92f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014761261s Mar 23 23:38:16.430: INFO: Pod "pod-71510fd0-1d9f-4db3-ad93-ea618c9dc92f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018744284s STEP: Saw pod success Mar 23 23:38:16.430: INFO: Pod "pod-71510fd0-1d9f-4db3-ad93-ea618c9dc92f" satisfied condition "Succeeded or Failed" Mar 23 23:38:16.433: INFO: Trying to get logs from node latest-worker pod pod-71510fd0-1d9f-4db3-ad93-ea618c9dc92f container test-container: STEP: delete the pod Mar 23 23:38:16.470: INFO: Waiting for pod pod-71510fd0-1d9f-4db3-ad93-ea618c9dc92f to disappear Mar 23 23:38:16.482: INFO: Pod pod-71510fd0-1d9f-4db3-ad93-ea618c9dc92f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:38:16.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9154" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":132,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:38:16.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 23 23:38:16.551: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:38:33.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4047" for this suite. • [SLOW TEST:16.790 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":12,"skipped":160,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:38:33.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-b4cec14b-56d2-4933-92a5-bfca7a854acd STEP: Creating the pod STEP: Updating configmap configmap-test-upd-b4cec14b-56d2-4933-92a5-bfca7a854acd STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:38:39.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-798" 
for this suite. • [SLOW TEST:6.108 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":170,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:38:39.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Mar 23 23:38:39.462: INFO: Waiting up to 5m0s for pod "pod-ab32ea81-f7e9-469f-b3dc-6048688dedc8" in namespace "emptydir-4430" to be "Succeeded or Failed" Mar 23 23:38:39.470: INFO: Pod "pod-ab32ea81-f7e9-469f-b3dc-6048688dedc8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.306031ms Mar 23 23:38:41.473: INFO: Pod "pod-ab32ea81-f7e9-469f-b3dc-6048688dedc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011097584s Mar 23 23:38:43.478: INFO: Pod "pod-ab32ea81-f7e9-469f-b3dc-6048688dedc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015327375s STEP: Saw pod success Mar 23 23:38:43.478: INFO: Pod "pod-ab32ea81-f7e9-469f-b3dc-6048688dedc8" satisfied condition "Succeeded or Failed" Mar 23 23:38:43.481: INFO: Trying to get logs from node latest-worker2 pod pod-ab32ea81-f7e9-469f-b3dc-6048688dedc8 container test-container: STEP: delete the pod Mar 23 23:38:43.517: INFO: Waiting for pod pod-ab32ea81-f7e9-469f-b3dc-6048688dedc8 to disappear Mar 23 23:38:43.536: INFO: Pod pod-ab32ea81-f7e9-469f-b3dc-6048688dedc8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:38:43.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4430" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":225,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:38:43.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-39ce5d47-0163-45d7-9399-798b9c238cb3 STEP: Creating a pod to test consume configMaps Mar 23 23:38:43.645: INFO: Waiting up to 5m0s for pod "pod-configmaps-c616dc48-9236-4d8f-b468-35272fc1723e" in namespace "configmap-4202" to be "Succeeded or Failed" Mar 23 23:38:43.650: INFO: Pod "pod-configmaps-c616dc48-9236-4d8f-b468-35272fc1723e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098196ms Mar 23 23:38:45.654: INFO: Pod "pod-configmaps-c616dc48-9236-4d8f-b468-35272fc1723e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008509726s Mar 23 23:38:47.658: INFO: Pod "pod-configmaps-c616dc48-9236-4d8f-b468-35272fc1723e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012108872s STEP: Saw pod success Mar 23 23:38:47.658: INFO: Pod "pod-configmaps-c616dc48-9236-4d8f-b468-35272fc1723e" satisfied condition "Succeeded or Failed" Mar 23 23:38:47.661: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c616dc48-9236-4d8f-b468-35272fc1723e container configmap-volume-test: STEP: delete the pod Mar 23 23:38:47.783: INFO: Waiting for pod pod-configmaps-c616dc48-9236-4d8f-b468-35272fc1723e to disappear Mar 23 23:38:47.898: INFO: Pod pod-configmaps-c616dc48-9236-4d8f-b468-35272fc1723e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:38:47.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4202" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:38:47.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-2nsq4 in namespace proxy-4945 I0323 23:38:47.989927 7 runners.go:190] Created replication controller with name: proxy-service-2nsq4, namespace: proxy-4945, replica count: 1 I0323 23:38:49.040455 7 runners.go:190] proxy-service-2nsq4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 23:38:50.040723 7 runners.go:190] proxy-service-2nsq4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 23:38:51.041013 7 runners.go:190] proxy-service-2nsq4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0323 23:38:52.041285 7 runners.go:190] proxy-service-2nsq4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 23 23:38:52.044: INFO: setup took 4.085725626s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 23 23:38:52.050: INFO: (0) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 5.847012ms) Mar 23 23:38:52.050: INFO: (0) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 6.109ms) Mar 23 23:38:52.050: INFO: (0) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 6.131555ms) Mar 23 23:38:52.050: INFO: (0) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 6.178357ms) Mar 23 23:38:52.051: INFO: (0) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 7.135141ms) Mar 23 23:38:52.052: INFO: (0) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 7.626682ms) Mar 23 23:38:52.053: INFO: (0) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 9.17494ms) Mar 23 23:38:52.054: INFO: (0) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 9.698682ms) Mar 23 23:38:52.054: INFO: (0) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 9.661521ms) Mar 23 23:38:52.055: INFO: (0) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... 
(200; 10.527647ms) Mar 23 23:38:52.056: INFO: (0) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 11.439415ms) Mar 23 23:38:52.059: INFO: (0) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 14.584529ms) Mar 23 23:38:52.059: INFO: (0) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 14.703421ms) Mar 23 23:38:52.059: INFO: (0) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 14.771014ms) Mar 23 23:38:52.059: INFO: (0) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 14.744015ms) Mar 23 23:38:52.062: INFO: (0) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test (200; 3.053816ms) Mar 23 23:38:52.066: INFO: (1) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.297787ms) Mar 23 23:38:52.066: INFO: (1) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.306343ms) Mar 23 23:38:52.066: INFO: (1) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 3.650511ms) Mar 23 23:38:52.066: INFO: (1) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 3.581309ms) Mar 23 23:38:52.066: INFO: (1) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 3.640311ms) Mar 23 23:38:52.066: INFO: (1) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.676272ms) Mar 23 23:38:52.068: INFO: (1) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: ... (200; 9.031722ms) Mar 23 23:38:52.072: INFO: (1) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 9.068445ms) Mar 23 23:38:52.072: INFO: (1) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 9.042954ms) Mar 23 23:38:52.073: INFO: (1) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 10.526259ms) Mar 23 23:38:52.073: INFO: (1) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 10.706826ms) Mar 23 23:38:52.076: INFO: (2) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 3.070352ms) Mar 23 23:38:52.077: INFO: (2) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.550308ms) Mar 23 23:38:52.077: INFO: (2) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 3.993692ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.156623ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.1233ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 4.256128ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... 
(200; 4.301306ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 4.424438ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 4.340879ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 4.635958ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 4.677123ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 4.663866ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.980731ms) Mar 23 23:38:52.078: INFO: (2) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 5.065357ms) Mar 23 23:38:52.079: INFO: (2) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 5.084542ms) Mar 23 23:38:52.079: INFO: (2) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test<... (200; 2.626025ms) Mar 23 23:38:52.081: INFO: (3) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... (200; 2.714459ms) Mar 23 23:38:52.083: INFO: (3) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.373368ms) Mar 23 23:38:52.083: INFO: (3) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.423151ms) Mar 23 23:38:52.083: INFO: (3) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test (200; 5.426472ms) Mar 23 23:38:52.085: INFO: (3) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 5.495349ms) Mar 23 23:38:52.085: INFO: (3) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 5.99265ms) Mar 23 23:38:52.085: INFO: (3) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 6.069735ms) Mar 23 23:38:52.085: INFO: (3) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 5.429763ms) Mar 23 23:38:52.085: INFO: (3) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 5.964171ms) Mar 23 23:38:52.085: INFO: (3) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 6.005659ms) Mar 23 23:38:52.085: INFO: (3) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 5.884018ms) Mar 23 23:38:52.086: INFO: (3) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 6.18372ms) Mar 23 23:38:52.089: INFO: (4) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 3.164282ms) Mar 23 23:38:52.089: INFO: (4) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... 
(200; 3.178458ms) Mar 23 23:38:52.090: INFO: (4) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.147455ms) Mar 23 23:38:52.091: INFO: (4) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.500173ms) Mar 23 23:38:52.091: INFO: (4) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 4.676851ms) Mar 23 23:38:52.091: INFO: (4) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 4.804338ms) Mar 23 23:38:52.092: INFO: (4) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 5.597109ms) Mar 23 23:38:52.092: INFO: (4) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 5.603634ms) Mar 23 23:38:52.092: INFO: (4) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 5.59847ms) Mar 23 23:38:52.092: INFO: (4) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 5.662456ms) Mar 23 23:38:52.092: INFO: (4) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 5.689748ms) Mar 23 23:38:52.092: INFO: (4) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 5.71779ms) Mar 23 23:38:52.092: INFO: (4) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 5.699428ms) Mar 23 23:38:52.092: INFO: (4) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 5.796811ms) Mar 23 23:38:52.092: INFO: (4) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 5.753498ms) Mar 23 23:38:52.092: INFO: (4) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test (200; 3.503121ms) Mar 23 23:38:52.096: INFO: (5) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test<... (200; 3.922669ms) Mar 23 23:38:52.096: INFO: (5) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.982338ms) Mar 23 23:38:52.096: INFO: (5) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.125584ms) Mar 23 23:38:52.096: INFO: (5) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.200868ms) Mar 23 23:38:52.096: INFO: (5) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... 
(200; 4.221071ms) Mar 23 23:38:52.097: INFO: (5) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.602237ms) Mar 23 23:38:52.098: INFO: (5) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 5.615969ms) Mar 23 23:38:52.098: INFO: (5) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 5.813159ms) Mar 23 23:38:52.098: INFO: (5) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 5.826526ms) Mar 23 23:38:52.098: INFO: (5) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 5.872269ms) Mar 23 23:38:52.098: INFO: (5) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 5.794546ms) Mar 23 23:38:52.098: INFO: (5) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 5.912686ms) Mar 23 23:38:52.100: INFO: (6) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: ... (200; 5.101331ms) Mar 23 23:38:52.103: INFO: (6) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 5.071152ms) Mar 23 23:38:52.106: INFO: (6) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 7.378861ms) Mar 23 23:38:52.106: INFO: (6) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 7.47127ms) Mar 23 23:38:52.106: INFO: (6) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 7.612752ms) Mar 23 23:38:52.106: INFO: (6) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 7.792298ms) Mar 23 23:38:52.106: INFO: (6) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 7.895573ms) Mar 23 23:38:52.106: INFO: (6) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 7.921351ms) Mar 23 23:38:52.106: INFO: (6) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 8.020185ms) Mar 23 23:38:52.107: INFO: (6) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 8.56915ms) Mar 23 23:38:52.107: INFO: (6) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 8.592131ms) Mar 23 23:38:52.110: INFO: (7) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.642694ms) Mar 23 23:38:52.111: INFO: (7) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 4.127324ms) Mar 23 23:38:52.111: INFO: (7) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 4.119741ms) Mar 23 23:38:52.111: INFO: (7) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.113832ms) Mar 23 23:38:52.111: INFO: (7) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... (200; 4.135871ms) Mar 23 23:38:52.111: INFO: (7) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 4.201738ms) Mar 23 23:38:52.111: INFO: (7) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 4.143062ms) Mar 23 23:38:52.111: INFO: (7) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test<... 
(200; 4.64836ms) Mar 23 23:38:52.112: INFO: (7) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 5.100058ms) Mar 23 23:38:52.112: INFO: (7) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 5.277025ms) Mar 23 23:38:52.112: INFO: (7) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 5.207669ms) Mar 23 23:38:52.112: INFO: (7) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 5.270137ms) Mar 23 23:38:52.115: INFO: (8) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 2.999771ms) Mar 23 23:38:52.115: INFO: (8) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 2.954682ms) Mar 23 23:38:52.115: INFO: (8) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.127213ms) Mar 23 23:38:52.115: INFO: (8) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: ... (200; 3.216207ms) Mar 23 23:38:52.115: INFO: (8) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.176752ms) Mar 23 23:38:52.115: INFO: (8) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 3.188129ms) Mar 23 23:38:52.116: INFO: (8) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 3.417774ms) Mar 23 23:38:52.116: INFO: (8) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 3.502729ms) Mar 23 23:38:52.117: INFO: (8) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 4.383841ms) Mar 23 23:38:52.117: INFO: (8) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 4.596521ms) Mar 23 23:38:52.117: INFO: (8) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 4.750957ms) Mar 23 23:38:52.117: INFO: (8) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 4.625378ms) Mar 23 23:38:52.117: INFO: (8) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 4.605235ms) Mar 23 23:38:52.117: INFO: (8) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 4.937529ms) Mar 23 23:38:52.120: INFO: (9) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... (200; 3.01553ms) Mar 23 23:38:52.121: INFO: (9) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.280031ms) Mar 23 23:38:52.121: INFO: (9) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.607651ms) Mar 23 23:38:52.121: INFO: (9) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.740068ms) Mar 23 23:38:52.122: INFO: (9) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... 
(200; 4.437781ms) Mar 23 23:38:52.122: INFO: (9) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 4.515907ms) Mar 23 23:38:52.122: INFO: (9) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 4.574156ms) Mar 23 23:38:52.122: INFO: (9) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.525099ms) Mar 23 23:38:52.122: INFO: (9) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test (200; 3.694256ms) Mar 23 23:38:52.127: INFO: (10) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.774787ms) Mar 23 23:38:52.127: INFO: (10) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.70141ms) Mar 23 23:38:52.127: INFO: (10) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 3.739823ms) Mar 23 23:38:52.127: INFO: (10) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.694264ms) Mar 23 23:38:52.127: INFO: (10) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... (200; 3.777059ms) Mar 23 23:38:52.128: INFO: (10) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 4.720973ms) Mar 23 23:38:52.128: INFO: (10) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.756762ms) Mar 23 23:38:52.128: INFO: (10) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 4.874174ms) Mar 23 23:38:52.128: INFO: (10) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 4.880256ms) Mar 23 23:38:52.128: INFO: (10) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 4.865544ms) Mar 23 23:38:52.128: INFO: (10) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 4.962484ms) Mar 23 23:38:52.128: INFO: (10) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 4.965189ms) Mar 23 23:38:52.128: INFO: (10) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 4.893931ms) Mar 23 23:38:52.132: INFO: (11) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.729648ms) Mar 23 23:38:52.132: INFO: (11) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.691519ms) Mar 23 23:38:52.132: INFO: (11) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.828828ms) Mar 23 23:38:52.132: INFO: (11) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... (200; 3.74652ms) Mar 23 23:38:52.133: INFO: (11) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... 
(200; 4.442536ms) Mar 23 23:38:52.133: INFO: (11) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 4.631946ms) Mar 23 23:38:52.133: INFO: (11) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 4.618031ms) Mar 23 23:38:52.133: INFO: (11) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 4.621947ms) Mar 23 23:38:52.133: INFO: (11) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 4.532476ms) Mar 23 23:38:52.133: INFO: (11) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test<... (200; 3.847137ms) Mar 23 23:38:52.139: INFO: (12) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 4.064382ms) Mar 23 23:38:52.140: INFO: (12) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test (200; 4.438592ms) Mar 23 23:38:52.140: INFO: (12) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 4.598267ms) Mar 23 23:38:52.140: INFO: (12) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.573231ms) Mar 23 23:38:52.140: INFO: (12) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.657451ms) Mar 23 23:38:52.140: INFO: (12) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 4.637515ms) Mar 23 23:38:52.140: INFO: (12) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... (200; 4.625557ms) Mar 23 23:38:52.140: INFO: (12) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 4.946368ms) Mar 23 23:38:52.140: INFO: (12) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.966615ms) Mar 23 23:38:52.140: INFO: (12) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 4.93334ms) Mar 23 23:38:52.140: INFO: (12) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 4.944372ms) Mar 23 23:38:52.141: INFO: (12) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 6.13669ms) Mar 23 23:38:52.141: INFO: (12) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 6.149761ms) Mar 23 23:38:52.155: INFO: (13) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 13.684222ms) Mar 23 23:38:52.156: INFO: (13) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test (200; 14.42972ms) Mar 23 23:38:52.156: INFO: (13) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 14.445972ms) Mar 23 23:38:52.156: INFO: (13) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 14.47653ms) Mar 23 23:38:52.156: INFO: (13) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... 
(200; 14.614771ms) Mar 23 23:38:52.156: INFO: (13) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 14.700126ms) Mar 23 23:38:52.157: INFO: (13) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 15.002097ms) Mar 23 23:38:52.157: INFO: (13) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 15.079851ms) Mar 23 23:38:52.157: INFO: (13) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 15.066143ms) Mar 23 23:38:52.157: INFO: (13) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... (200; 15.169032ms) Mar 23 23:38:52.157: INFO: (13) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 15.104522ms) Mar 23 23:38:52.159: INFO: (14) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 2.400738ms) Mar 23 23:38:52.159: INFO: (14) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test (200; 3.754714ms) Mar 23 23:38:52.161: INFO: (14) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... (200; 4.203586ms) Mar 23 23:38:52.161: INFO: (14) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.180127ms) Mar 23 23:38:52.161: INFO: (14) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 4.14692ms) Mar 23 23:38:52.161: INFO: (14) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.179026ms) Mar 23 23:38:52.161: INFO: (14) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 4.256744ms) Mar 23 23:38:52.161: INFO: (14) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 4.309727ms) Mar 23 23:38:52.161: INFO: (14) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 4.385152ms) Mar 23 23:38:52.161: INFO: (14) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 4.420361ms) Mar 23 23:38:52.161: INFO: (14) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 4.392991ms) Mar 23 23:38:52.164: INFO: (15) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 2.461897ms) Mar 23 23:38:52.164: INFO: (15) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 2.743106ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 3.057922ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 3.369573ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 3.417953ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... 
(200; 3.513174ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 3.607586ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 3.650784ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 3.650339ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.638303ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... (200; 3.595132ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 3.738477ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 3.824227ms) Mar 23 23:38:52.165: INFO: (15) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: ... (200; 2.203009ms) Mar 23 23:38:52.168: INFO: (16) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 2.642267ms) Mar 23 23:38:52.168: INFO: (16) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 2.591832ms) Mar 23 23:38:52.168: INFO: (16) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 2.666763ms) Mar 23 23:38:52.168: INFO: (16) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 2.73148ms) Mar 23 23:38:52.168: INFO: (16) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test (200; 2.97788ms) Mar 23 23:38:52.169: INFO: (16) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.216063ms) Mar 23 23:38:52.169: INFO: (16) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 3.328522ms) Mar 23 23:38:52.169: INFO: (16) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 3.776427ms) Mar 23 23:38:52.169: INFO: (16) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 3.901ms) Mar 23 23:38:52.169: INFO: (16) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 3.920056ms) Mar 23 23:38:52.169: INFO: (16) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 3.939215ms) Mar 23 23:38:52.169: INFO: (16) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 3.905047ms) Mar 23 23:38:52.169: INFO: (16) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 3.929179ms) Mar 23 23:38:52.172: INFO: (17) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 2.616565ms) Mar 23 23:38:52.173: INFO: (17) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: ... 
(200; 3.688036ms) Mar 23 23:38:52.173: INFO: (17) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 3.655193ms) Mar 23 23:38:52.173: INFO: (17) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 3.647851ms) Mar 23 23:38:52.173: INFO: (17) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 4.007273ms) Mar 23 23:38:52.173: INFO: (17) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 4.001999ms) Mar 23 23:38:52.174: INFO: (17) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.539499ms) Mar 23 23:38:52.174: INFO: (17) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 4.490385ms) Mar 23 23:38:52.174: INFO: (17) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.531228ms) Mar 23 23:38:52.174: INFO: (17) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 4.517463ms) Mar 23 23:38:52.174: INFO: (17) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.601233ms) Mar 23 23:38:52.174: INFO: (17) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 4.554495ms) Mar 23 23:38:52.174: INFO: (17) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 4.608544ms) Mar 23 23:38:52.174: INFO: (17) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 4.56848ms) Mar 23 23:38:52.174: INFO: (17) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 4.692877ms) Mar 23 23:38:52.177: INFO: (18) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 3.016627ms) Mar 23 23:38:52.177: INFO: (18) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test (200; 4.4155ms) Mar 23 23:38:52.178: INFO: (18) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... (200; 4.419645ms) Mar 23 23:38:52.178: INFO: (18) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:1080/proxy/: test<... (200; 4.431933ms) Mar 23 23:38:52.178: INFO: (18) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 4.467265ms) Mar 23 23:38:52.179: INFO: (18) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 4.436436ms) Mar 23 23:38:52.178: INFO: (18) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 4.422931ms) Mar 23 23:38:52.179: INFO: (18) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.498756ms) Mar 23 23:38:52.179: INFO: (18) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 4.424338ms) Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:443/proxy/: test<... (200; 4.365261ms) Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5/proxy/: test (200; 4.354051ms) Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:462/proxy/: tls qux (200; 4.357349ms) Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:1080/proxy/: ... 
(200; 4.393134ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.393638ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/pods/http:proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.395376ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:162/proxy/: bar (200; 4.455773ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/: foo (200; 4.426335ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname1/proxy/: foo (200; 4.509462ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/pods/https:proxy-service-2nsq4-xdhn5:460/proxy/: tls baz (200; 4.452878ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname1/proxy/: tls baz (200; 4.532339ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname2/proxy/: bar (200; 4.472464ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/services/proxy-service-2nsq4:portname2/proxy/: bar (200; 4.498577ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/services/http:proxy-service-2nsq4:portname1/proxy/: foo (200; 4.533911ms)
Mar 23 23:38:52.183: INFO: (19) /api/v1/namespaces/proxy-4945/services/https:proxy-service-2nsq4:tlsportname2/proxy/: tls qux (200; 4.560844ms)
STEP: deleting ReplicationController proxy-service-2nsq4 in namespace proxy-4945, will wait for the garbage collector to delete the pods
Mar 23 23:38:52.242: INFO: Deleting ReplicationController proxy-service-2nsq4 took: 7.047113ms
Mar 23 23:38:52.542: INFO: Terminating ReplicationController proxy-service-2nsq4 pods took: 300.380229ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 23 23:38:54.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4945" for this suite.
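The numbered blocks above, ending with pass (19), repeat the same set of proxy probes; each entry records the proxied path, the body echoed back by the backend (foo, bar, tls baz, tls qux, test, or a truncated listing), the HTTP status, and the round-trip latency. The paths exercise the apiserver proxy subresource, GET /api/v1/namespaces/{namespace}/pods/{name}:{port}/proxy/ and the services equivalent, where an http: or https: prefix on the name selects the scheme the apiserver uses to dial the backend. A minimal client-go sketch of one such request follows, reusing the namespace, pod name, and port from the log; it is illustrative only, not the e2e framework's own helper:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// GET /api/v1/namespaces/proxy-4945/pods/proxy-service-2nsq4-xdhn5:160/proxy/
	// The ":160" suffix on the name selects the target port; per the log
	// above, this endpoint answers 200 with the body "foo".
	body, err := clientset.CoreV1().RESTClient().Get().
		Namespace("proxy-4945").
		Resource("pods").
		Name("proxy-service-2nsq4-xdhn5:160").
		SubResource("proxy").
		Do(context.TODO()).
		Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body)
}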
• [SLOW TEST:6.972 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":16,"skipped":279,"failed":0}
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 23 23:38:54.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 23 23:38:54.918: INFO: Creating deployment "webserver-deployment"
Mar 23 23:38:54.931: INFO: Waiting for observed generation 1
Mar 23 23:38:56.994: INFO: Waiting for all required pods to come up
Mar 23 23:38:56.999: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 23 23:39:05.008: INFO: Waiting for deployment "webserver-deployment" to complete
Mar 23 23:39:05.014: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar 23 23:39:05.020: INFO: Updating deployment webserver-deployment
Mar 23 23:39:05.020: INFO: Waiting for observed generation 2
Mar 23 23:39:07.046: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 23 23:39:07.050: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 23 23:39:07.052: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 23 23:39:07.058: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 23 23:39:07.058: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 23 23:39:07.060: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 23 23:39:07.063: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar 23 23:39:07.063: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar 23 23:39:07.068: INFO: Updating deployment webserver-deployment
Mar 23 23:39:07.068: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar 23 23:39:07.191: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 23 23:39:07.253: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Mar 23 23:39:07.511: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment deployment-1650 /apis/apps/v1/namespaces/deployment-1650/deployments/webserver-deployment 93cd436a-5394-4696-8794-a9ba5ef59b43 2267773 3 2020-03-23 23:38:54 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00448e0a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-23 23:39:05 +0000 UTC,LastTransitionTime:2020-03-23 23:38:54 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-23 23:39:07 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 23 23:39:07.568: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-1650 /apis/apps/v1/namespaces/deployment-1650/replicasets/webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 2267830 3 2020-03-23 23:39:05 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 93cd436a-5394-4696-8794-a9ba5ef59b43 0xc00448e837 0xc00448e838}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00448e8a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 23 23:39:07.568: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 23 23:39:07.568: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-1650 /apis/apps/v1/namespaces/deployment-1650/replicasets/webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 2267813 3 2020-03-23 23:38:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 93cd436a-5394-4696-8794-a9ba5ef59b43 0xc00448e737 0xc00448e738}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00448e7c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 23 23:39:07.668: INFO: Pod "webserver-deployment-595b5b9587-2shhj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2shhj webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-2shhj 2f38b6d2-aa54-4ecd-ab71-5214e526f218 2267807 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00448efd7 0xc00448efd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.668: INFO: Pod "webserver-deployment-595b5b9587-7kvvk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7kvvk webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-7kvvk 07ba4263-4583-4972-9686-7cf21c0da287 2267659 0 2020-03-23 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] 
[{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00448f127 0xc00448f128}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.245,StartTime:2020-03-23 23:38:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 23:39:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b5f9ad80879c103d075c1103025d0f1dee5d6467f0562ea8ac70b4f354aa3f8a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.245,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.668: INFO: Pod "webserver-deployment-595b5b9587-8zj79" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8zj79 webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-8zj79 2aa526a0-25e7-431f-9156-5ed52ec4d4e1 2267673 0 2020-03-23 23:38:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00448f307 0xc00448f308}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-
ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.103,StartTime:2020-03-23 23:38:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 23:39:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://08ecece8c4b36d50fef92cc0224a337f7e8d495f64bde8c01b788a6c0bcf008f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.668: INFO: Pod "webserver-deployment-595b5b9587-c4fg6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c4fg6 webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-c4fg6 511ae057-aad2-42ca-b083-af4d19571acd 2267806 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00448f597 0xc00448f598}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.668: INFO: Pod "webserver-deployment-595b5b9587-d8hkk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d8hkk webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-d8hkk 4a7d4d30-7aac-48c0-b13e-cebf49db37db 2267803 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] 
[{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00448f777 0xc00448f778}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.669: INFO: Pod "webserver-deployment-595b5b9587-fbxfd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fbxfd webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-fbxfd 
02deae43-9263-4313-9f88-090b4c4772b6 2267636 0 2020-03-23 23:38:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00448f917 0xc00448f918}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.242,StartTime:2020-03-23 23:38:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 23:38:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3c5b61b7571fa37a1083bc9f49bf40844436b5c155980f7ea2a77e44cb163640,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.669: INFO: Pod "webserver-deployment-595b5b9587-fps5f" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fps5f webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-fps5f 21d4469d-1d40-495d-a3a2-fc6c8a986087 2267654 0 2020-03-23 23:38:55 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00448fc67 0xc00448fc68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerat
ions:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.244,StartTime:2020-03-23 23:38:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 23:39:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a72dabbdb3f0e15f21b2c0c8871a34fd9bef71ddc7c5d01a76c100af873b647a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.244,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.669: INFO: Pod "webserver-deployment-595b5b9587-g47hl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g47hl webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-g47hl 8e819ff1-b0f7-43d9-ac8b-4a6e6fe89579 2267834 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00448ff87 0xc00448ff88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-23 23:39:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.669: INFO: Pod "webserver-deployment-595b5b9587-h7c4l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-h7c4l webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-h7c4l 1db1a558-250c-49d9-b4cf-cf21e250361f 2267815 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450c197 0xc00450c198}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias
{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.670: INFO: Pod "webserver-deployment-595b5b9587-kdwxb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kdwxb webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-kdwxb 1c9ae378-9dee-474f-ae3a-142a4fc0426f 2267792 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450c307 0xc00450c308}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:no
de.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.670: INFO: Pod "webserver-deployment-595b5b9587-kt4pq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kt4pq webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-kt4pq d2787d2f-5213-42f1-9b1a-a1bcff796a09 2267796 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450c477 0xc00450c478}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Tolerat
ion{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.670: INFO: Pod "webserver-deployment-595b5b9587-lj2d6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lj2d6 webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-lj2d6 c079b099-7463-48f6-9e2b-2e1bdf070679 2267663 0 2020-03-23 23:38:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450c6d7 0xc00450c6d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerNam
e:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.104,StartTime:2020-03-23 23:38:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 23:39:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://462a3eea4782203216bb9f8fd2509db5c6f904f86f40ce3f48ee74efe77d88f8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.104,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.670: INFO: Pod "webserver-deployment-595b5b9587-ptxzg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ptxzg webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-ptxzg 971ed6bf-ac59-4c79-a0b8-95b9154774be 2267795 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450c917 0xc00450c918}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.670: INFO: Pod "webserver-deployment-595b5b9587-pzv66" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pzv66 webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-pzv66 e7353827-dff0-4d51-86e6-a73aad923368 2267798 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] 
[{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450cab7 0xc00450cab8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-23 23:39:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.670: INFO: Pod "webserver-deployment-595b5b9587-qq5wr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qq5wr webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-qq5wr 79584a24-0921-4144-a909-8930ece4867f 2267776 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450cd37 0xc00450cd38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,To
lerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.670: INFO: Pod "webserver-deployment-595b5b9587-t4rf2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t4rf2 webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-t4rf2 8e5babb2-0486-4091-8842-e6e116e531ed 2267693 0 2020-03-23 23:38:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450cf27 0xc00450cf28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.105,StartTime:2020-03-23 23:38:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 23:39:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://79102192b4c03360edb10699cce55c1f4909a6985781b2aad7d0a617ee149f4a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.105,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.671: INFO: Pod "webserver-deployment-595b5b9587-vh727" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vh727 webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-vh727 889e189f-0dae-45c4-b8df-ab40d1365141 2267630 0 2020-03-23 23:38:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450d257 0xc00450d258}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.243,StartTime:2020-03-23 23:38:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 23:38:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b72d632072d1080cb26bb2de533b011f68aed5e107bd732534465b104cf37d07,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.671: INFO: Pod "webserver-deployment-595b5b9587-w9wbd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w9wbd webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-w9wbd 9d58c521-138d-46f1-be8a-628c4baf1f06 2267797 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450d547 0xc00450d548}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreac
hable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.671: INFO: Pod "webserver-deployment-595b5b9587-wn556" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wn556 webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-wn556 40d3f1c2-c3cc-4d92-ae8a-76446d031914 2267816 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450d717 0xc00450d718}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.
io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.671: INFO: Pod "webserver-deployment-595b5b9587-xjrrz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xjrrz webserver-deployment-595b5b9587- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-595b5b9587-xjrrz 1f7ab886-89ab-4bcd-a76f-8b2258bad1dd 2267618 0 2020-03-23 23:38:54 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 5769d074-35d5-481d-9451-f4f48da3419f 0xc00450d977 0xc00450d978}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,Ini
tContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:38:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.241,StartTime:2020-03-23 23:38:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 23:38:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://779ab79ea7f9184cf25f8a907601e6d3eda8a2c5819ea59601e9270627849381,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.241,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.671: INFO: Pod "webserver-deployment-c7997dcc8-2vt4j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2vt4j webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-2vt4j 2b6c2c33-015c-40f1-b75b-24b887116f3d 2267802 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc00450dc27 0xc00450dc28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.671: INFO: Pod "webserver-deployment-c7997dcc8-692zw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-692zw webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-692zw 69a3556a-b3aa-4e32-8391-f99d5ef4244c 2267746 0 2020-03-23 23:39:05 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc00450de67 0xc00450de68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-23 23:39:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.671: INFO: Pod "webserver-deployment-c7997dcc8-c7lsb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-c7lsb webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-c7lsb 5c0c0b2d-e6c7-4b52-b829-b79f07382023 2267838 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc004572067 0xc004572068}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareP
rocessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-23 23:39:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.672: INFO: Pod "webserver-deployment-c7997dcc8-gkxh6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gkxh6 webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-gkxh6 5cb5b11d-3223-4f0c-8ccc-123da643580b 2267719 0 2020-03-23 23:39:05 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc004572277 0xc004572278}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-23 23:39:05 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.672: INFO: Pod "webserver-deployment-c7997dcc8-gw9w8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gw9w8 webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-gw9w8 c23d4610-3c50-40e4-a488-ddeb6c6f4245 2267722 0 2020-03-23 23:39:05 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc004572477 0xc004572478}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pr
eemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-23 23:39:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.672: INFO: Pod "webserver-deployment-c7997dcc8-khtrt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-khtrt webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-khtrt afeb42b5-20a2-4c39-ac70-262ee2fa7a1f 2267804 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc004572767 0xc004572768}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.672: INFO: Pod "webserver-deployment-c7997dcc8-lpfcl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lpfcl webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-lpfcl f2095e82-5f3c-43ef-b2ed-6c353e2b5e53 2267791 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc004572907 0xc004572908}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.672: INFO: Pod "webserver-deployment-c7997dcc8-pzd8f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pzd8f webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-pzd8f 2b7d68c0-75bd-4e23-a726-62d7575a7673 2267801 0 2020-03-23 
23:39:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc004572ab7 0xc004572ab8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.672: INFO: Pod "webserver-deployment-c7997dcc8-q7dbj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q7dbj webserver-deployment-c7997dcc8- deployment-1650 
/api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-q7dbj 932f636e-8cc0-4f2f-a217-c44a0a2b2c12 2267749 0 2020-03-23 23:39:05 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc004572c77 0xc004572c78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 
23:39:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-23 23:39:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.672: INFO: Pod "webserver-deployment-c7997dcc8-vnvfv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vnvfv webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-vnvfv 9128d9f0-f1c0-44ca-b70c-31e2f72bad9e 2267817 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc004572f97 0xc004572f98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect
:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.672: INFO: Pod "webserver-deployment-c7997dcc8-wsv7f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wsv7f webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-wsv7f e353ba3a-5e8d-4959-b9a2-0999173586d2 2267732 0 2020-03-23 23:39:05 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc004573237 0xc004573238}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Toleration
s:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-23 23:39:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.673: INFO: Pod "webserver-deployment-c7997dcc8-xswtc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xswtc webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-xswtc 0d846351-909b-4631-a430-02e5da435273 2267800 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc0045733f7 0xc0045733f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:07.673: INFO: Pod "webserver-deployment-c7997dcc8-zpbgf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zpbgf webserver-deployment-c7997dcc8- deployment-1650 /api/v1/namespaces/deployment-1650/pods/webserver-deployment-c7997dcc8-zpbgf bbfbbe79-c5c6-4b89-9c4b-4e4cd6efab0f 2267828 0 2020-03-23 23:39:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 131723d9-e308-47ab-88e6-6d17bb5e9ade 0xc004573667 0xc004573668}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72rmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72rmc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72rmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:39:07 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-23 23:39:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:39:07.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1650" for this suite. • [SLOW TEST:13.061 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":17,"skipped":279,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:39:07.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-ac5664bf-3522-4a22-a298-cb862e9936c3 STEP: Creating a pod to test consume secrets Mar 23 23:39:08.332: INFO: Waiting up to 5m0s for pod "pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d" in namespace "secrets-8712" to be "Succeeded or Failed" Mar 23 23:39:08.337: INFO: Pod "pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.612322ms Mar 23 23:39:10.340: INFO: Pod "pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008736869s Mar 23 23:39:12.392: INFO: Pod "pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059958702s Mar 23 23:39:14.446: INFO: Pod "pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114201214s Mar 23 23:39:16.869: INFO: Pod "pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537354796s Mar 23 23:39:19.047: INFO: Pod "pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d": Phase="Pending", Reason="", readiness=false. 
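The [sig-apps] Deployment records above are the substance of the proportional-scaling spec: the Deployment is rolled to the deliberately unpullable image webserver:404, so every pod of the new ReplicaSet (webserver-deployment-c7997dcc8) sits at Phase:Pending with a ContainerCreating waiting reason and Ready:False, which is why the framework prints each one as "not available". Proportional scaling means that when the Deployment is resized mid-rollout, the controller splits the added replicas between the old and new ReplicaSets in proportion to their current sizes, bounded by maxSurge. A minimal sketch of such a Deployment using the k8s.io/api types — the name, label, and image echo the log, while the replica count and surge settings are illustrative assumptions, not the test's actual values:

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        replicas := int32(20)            // illustrative scale-up target
        maxSurge := intstr.FromInt(3)    // pods allowed above the desired count mid-rollout
        maxUnavailable := intstr.FromInt(2)
        labels := map[string]string{"name": "httpd"}

        d := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxSurge:       &maxSurge,
                        MaxUnavailable: &maxUnavailable,
                    },
                },
                Template: v1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: v1.PodSpec{
                        Containers: []v1.Container{{
                            Name:  "httpd",
                            Image: "webserver:404", // unpullable on purpose: new pods never become Ready
                        }},
                    },
                },
            },
        }
        b, _ := json.MarshalIndent(d, "", "  ")
        fmt.Println(string(b))
    }

Scaling such a Deployment while the rollout is wedged produces exactly the mixed pod list dumped above: the old ReplicaSet keeps serving while the new one accumulates Pending pods up to the surge budget.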
Elapsed: 10.715524838s Mar 23 23:39:21.083: INFO: Pod "pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.751507825s Mar 23 23:39:23.212: INFO: Pod "pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.880199163s STEP: Saw pod success Mar 23 23:39:23.212: INFO: Pod "pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d" satisfied condition "Succeeded or Failed" Mar 23 23:39:23.264: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d container secret-volume-test: STEP: delete the pod Mar 23 23:39:24.099: INFO: Waiting for pod pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d to disappear Mar 23 23:39:24.306: INFO: Pod pod-secrets-f0fa8ce3-86a4-47e4-a392-8fed9367968d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:39:24.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8712" for this suite. STEP: Destroying namespace "secret-namespace-8304" for this suite. • [SLOW TEST:17.145 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":291,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:39:25.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 23:39:25.301: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b24f0ad1-906f-4815-89e8-f02fab7bba27" in namespace "projected-8861" to be "Succeeded or Failed" Mar 23 23:39:25.502: INFO: Pod "downwardapi-volume-b24f0ad1-906f-4815-89e8-f02fab7bba27": Phase="Pending", Reason="", readiness=false. Elapsed: 200.546226ms Mar 23 23:39:27.507: INFO: Pod "downwardapi-volume-b24f0ad1-906f-4815-89e8-f02fab7bba27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205880106s Mar 23 23:39:29.556: INFO: Pod "downwardapi-volume-b24f0ad1-906f-4815-89e8-f02fab7bba27": Phase="Running", Reason="", readiness=true. 
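The [sig-storage] Secrets spec that just passed creates same-named secrets in two namespaces (secrets-8712 and secret-namespace-8304, both destroyed above) and verifies the pod mounts the one from its own namespace; a SecretVolumeSource carries only a name, so it can never reach across namespaces. A sketch of the volume wiring — the container name secret-volume-test comes from the log, while the mount path, args, and key file are assumptions about how the agnhost mounttest helper is typically driven:

    package main

    import (
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example", Namespace: "secrets-example"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name: "secret-volume",
                    // SecretName has no namespace field: it always resolves in
                    // the pod's own namespace.
                    VolumeSource: v1.VolumeSource{Secret: &v1.SecretVolumeSource{SecretName: "secret-test"}},
                }},
                Containers: []v1.Container{{
                    Name:  "secret-volume-test",
                    Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
                    Args:  []string{"mounttest", "--file_content=/etc/secret-volume/data-1"},
                    VolumeMounts: []v1.VolumeMount{{
                        Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
                    }},
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }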
Elapsed: 4.255158675s Mar 23 23:39:31.561: INFO: Pod "downwardapi-volume-b24f0ad1-906f-4815-89e8-f02fab7bba27": Phase="Running", Reason="", readiness=true. Elapsed: 6.259634979s Mar 23 23:39:33.564: INFO: Pod "downwardapi-volume-b24f0ad1-906f-4815-89e8-f02fab7bba27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.263216538s STEP: Saw pod success Mar 23 23:39:33.564: INFO: Pod "downwardapi-volume-b24f0ad1-906f-4815-89e8-f02fab7bba27" satisfied condition "Succeeded or Failed" Mar 23 23:39:33.567: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b24f0ad1-906f-4815-89e8-f02fab7bba27 container client-container: STEP: delete the pod Mar 23 23:39:33.615: INFO: Waiting for pod downwardapi-volume-b24f0ad1-906f-4815-89e8-f02fab7bba27 to disappear Mar 23 23:39:33.626: INFO: Pod downwardapi-volume-b24f0ad1-906f-4815-89e8-f02fab7bba27 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:39:33.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8861" for this suite. • [SLOW TEST:8.549 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:39:33.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
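The Projected downwardAPI spec above exercises a documented defaulting rule: a downward-API resourceFieldRef for limits.cpu on a container that sets no CPU limit reports the node's allocatable CPU instead, which is what the client-container reads back from its projected volume. A sketch of the relevant volume — client-container and limits.cpu are from the log; the pod name, file path, and container args are illustrative assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: v1.PodSpec{
                RestartPolicy: v1.RestartPolicyNever,
                Volumes: []v1.Volume{{
                    Name: "podinfo",
                    VolumeSource: v1.VolumeSource{
                        Projected: &v1.ProjectedVolumeSource{
                            Sources: []v1.VolumeProjection{{
                                DownwardAPI: &v1.DownwardAPIProjection{
                                    Items: []v1.DownwardAPIVolumeFile{{
                                        Path: "cpu_limit",
                                        // The container below sets no limits.cpu, so this
                                        // falls back to the node's allocatable CPU.
                                        ResourceFieldRef: &v1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.cpu",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []v1.Container{{
                    Name:         "client-container",
                    Image:        "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
                    Args:         []string{"mounttest", "--file_content=/etc/podinfo/cpu_limit"},
                    VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }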
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 23 23:39:41.759: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 23 23:39:41.802: INFO: Pod pod-with-prestop-exec-hook still exists Mar 23 23:39:43.803: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 23 23:39:43.807: INFO: Pod pod-with-prestop-exec-hook still exists Mar 23 23:39:45.803: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 23 23:39:45.807: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:39:45.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7555" for this suite. • [SLOW TEST:12.188 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":330,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:39:45.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e494526f-0a81-457b-b8a0-633ff0832c77 STEP: Creating a pod to test consume secrets Mar 23 23:39:45.903: INFO: Waiting up to 5m0s for pod "pod-secrets-7338b9dc-0abc-4324-a4dd-2fdd1de724c3" in namespace "secrets-4635" to be "Succeeded or Failed" Mar 23 23:39:45.909: INFO: Pod "pod-secrets-7338b9dc-0abc-4324-a4dd-2fdd1de724c3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047954ms Mar 23 23:39:47.928: INFO: Pod "pod-secrets-7338b9dc-0abc-4324-a4dd-2fdd1de724c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024581708s Mar 23 23:39:49.932: INFO: Pod "pod-secrets-7338b9dc-0abc-4324-a4dd-2fdd1de724c3": Phase="Succeeded", Reason="", readiness=false. 
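The Container Lifecycle Hook spec that just passed deletes a pod declaring a preStop exec hook and then polls the companion HTTPGet handler pod to confirm the hook fired before the container died; the repeated "still exists" / "no longer exists" lines above are the test waiting out the graceful-deletion window in which the hook runs. The shape of such a hook, with an assumed command and placeholder handler address (the era-appropriate Go type is v1.Handler, renamed LifecycleHandler in later k8s.io/api releases):

    package main

    import (
        "encoding/json"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
            Spec: v1.PodSpec{
                Containers: []v1.Container{{
                    Name:  "pod-with-prestop-exec-hook",
                    Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
                    Args:  []string{"pause"},
                    Lifecycle: &v1.Lifecycle{
                        // Runs inside the container after deletion is requested,
                        // before the TERM signal is delivered.
                        PreStop: &v1.Handler{ // LifecycleHandler in k8s.io/api >= v0.23
                            Exec: &v1.ExecAction{
                                // Illustrative: notify a handler endpoint that the hook fired.
                                Command: []string{"sh", "-c", "curl http://HANDLER_IP:8080/echo?msg=prestop"},
                            },
                        },
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }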
Elapsed: 4.028713282s STEP: Saw pod success Mar 23 23:39:49.932: INFO: Pod "pod-secrets-7338b9dc-0abc-4324-a4dd-2fdd1de724c3" satisfied condition "Succeeded or Failed" Mar 23 23:39:49.935: INFO: Trying to get logs from node latest-worker pod pod-secrets-7338b9dc-0abc-4324-a4dd-2fdd1de724c3 container secret-env-test: STEP: delete the pod Mar 23 23:39:49.953: INFO: Waiting for pod pod-secrets-7338b9dc-0abc-4324-a4dd-2fdd1de724c3 to disappear Mar 23 23:39:49.957: INFO: Pod pod-secrets-7338b9dc-0abc-4324-a4dd-2fdd1de724c3 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:39:49.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4635" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:39:49.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 23 23:39:50.013: INFO: Created pod &Pod{ObjectMeta:{dns-8167 dns-8167 /api/v1/namespaces/dns-8167/pods/dns-8167 27122860-feed-49c9-bc8d-b4cdd3af4279 2268325 0 2020-03-23 23:39:50 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dk58q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dk58q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dk58q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 23 23:39:50.042: INFO: The status of Pod dns-8167 is Pending, waiting for it to be Running (with Ready = true) Mar 23 23:39:52.046: INFO: The status of Pod dns-8167 is Pending, waiting for it to be Running (with Ready = true) Mar 23 23:39:54.047: INFO: The status of Pod dns-8167 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 23 23:39:54.047: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8167 PodName:dns-8167 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:39:54.047: INFO: >>> kubeConfig: /root/.kube/config I0323 23:39:54.082171 7 log.go:172] (0xc0024d24d0) (0xc0028be500) Create stream I0323 23:39:54.082193 7 log.go:172] (0xc0024d24d0) (0xc0028be500) Stream added, broadcasting: 1 I0323 23:39:54.084059 7 log.go:172] (0xc0024d24d0) Reply frame received for 1 I0323 23:39:54.084108 7 log.go:172] (0xc0024d24d0) (0xc0022bf400) Create stream I0323 23:39:54.084121 7 log.go:172] (0xc0024d24d0) (0xc0022bf400) Stream added, broadcasting: 3 I0323 23:39:54.084924 7 log.go:172] (0xc0024d24d0) Reply frame received for 3 I0323 23:39:54.084961 7 log.go:172] (0xc0024d24d0) (0xc0029463c0) Create stream I0323 23:39:54.084974 7 log.go:172] (0xc0024d24d0) (0xc0029463c0) Stream added, broadcasting: 5 I0323 23:39:54.085883 7 log.go:172] (0xc0024d24d0) Reply frame received for 5 I0323 23:39:54.172407 7 log.go:172] (0xc0024d24d0) Data frame received for 3 I0323 23:39:54.172436 7 log.go:172] (0xc0022bf400) (3) Data frame handling I0323 23:39:54.172447 7 log.go:172] (0xc0022bf400) (3) Data frame sent I0323 23:39:54.172955 7 log.go:172] (0xc0024d24d0) Data frame received for 5 I0323 23:39:54.172983 7 log.go:172] (0xc0029463c0) (5) Data frame handling I0323 23:39:54.173009 7 log.go:172] (0xc0024d24d0) Data frame received for 3 I0323 23:39:54.173023 7 log.go:172] (0xc0022bf400) (3) Data frame handling I0323 23:39:54.174213 7 log.go:172] (0xc0024d24d0) Data frame received for 1 I0323 23:39:54.174229 7 log.go:172] (0xc0028be500) (1) Data frame handling I0323 23:39:54.174243 7 log.go:172] (0xc0028be500) (1) Data frame sent I0323 23:39:54.174263 7 log.go:172] (0xc0024d24d0) (0xc0028be500) Stream removed, broadcasting: 1 I0323 23:39:54.174279 7 log.go:172] (0xc0024d24d0) Go away received I0323 23:39:54.174598 7 log.go:172] (0xc0024d24d0) (0xc0028be500) Stream removed, broadcasting: 1 I0323 23:39:54.174618 7 log.go:172] (0xc0024d24d0) (0xc0022bf400) Stream removed, broadcasting: 3 I0323 23:39:54.174632 7 log.go:172] (0xc0024d24d0) (0xc0029463c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 23 23:39:54.174: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8167 PodName:dns-8167 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:39:54.174: INFO: >>> kubeConfig: /root/.kube/config I0323 23:39:54.200039 7 log.go:172] (0xc002567c30) (0xc0022bf720) Create stream I0323 23:39:54.200057 7 log.go:172] (0xc002567c30) (0xc0022bf720) Stream added, broadcasting: 1 I0323 23:39:54.202233 7 log.go:172] (0xc002567c30) Reply frame received for 1 I0323 23:39:54.202268 7 log.go:172] (0xc002567c30) (0xc002a6c6e0) Create stream I0323 23:39:54.202278 7 log.go:172] (0xc002567c30) (0xc002a6c6e0) Stream added, broadcasting: 3 I0323 23:39:54.203070 7 log.go:172] (0xc002567c30) Reply frame received for 3 I0323 23:39:54.203105 7 log.go:172] (0xc002567c30) (0xc002946460) Create stream I0323 23:39:54.203124 7 log.go:172] (0xc002567c30) (0xc002946460) Stream added, broadcasting: 5 I0323 23:39:54.204077 7 log.go:172] (0xc002567c30) Reply frame received for 5 I0323 23:39:54.286216 7 log.go:172] (0xc002567c30) Data frame received for 3 I0323 23:39:54.286262 7 log.go:172] (0xc002a6c6e0) (3) Data frame handling I0323 23:39:54.286314 7 log.go:172] (0xc002a6c6e0) (3) Data frame sent I0323 23:39:54.286592 7 log.go:172] (0xc002567c30) Data frame received for 3 I0323 23:39:54.286632 7 log.go:172] (0xc002a6c6e0) (3) Data frame handling I0323 23:39:54.286669 7 log.go:172] (0xc002567c30) Data frame received for 5 I0323 23:39:54.286692 7 log.go:172] (0xc002946460) (5) Data frame handling I0323 23:39:54.288420 7 log.go:172] (0xc002567c30) Data frame received for 1 I0323 23:39:54.288447 7 log.go:172] (0xc0022bf720) (1) Data frame handling I0323 23:39:54.288480 7 log.go:172] (0xc0022bf720) (1) Data frame sent I0323 23:39:54.288502 7 log.go:172] (0xc002567c30) (0xc0022bf720) Stream removed, broadcasting: 1 I0323 23:39:54.288522 7 log.go:172] (0xc002567c30) Go away received I0323 23:39:54.288666 7 log.go:172] (0xc002567c30) (0xc0022bf720) Stream removed, broadcasting: 1 I0323 23:39:54.288684 7 log.go:172] (0xc002567c30) (0xc002a6c6e0) Stream removed, broadcasting: 3 I0323 23:39:54.288697 7 log.go:172] (0xc002567c30) (0xc002946460) Stream removed, broadcasting: 5 Mar 23 23:39:54.288: INFO: Deleting pod dns-8167... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:39:54.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8167" for this suite. 
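------------------------------
The full pod dump above boils down to a few fields. This sketch rebuilds just those, with the DNS values taken verbatim from the "Created pod" log line; printing the object stands in for the Create call, so it runs without a cluster.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-8167"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
				Args:  []string{"pause"},
			}},
			// DNSPolicy "None" makes kubelet ignore cluster DNS entirely and
			// build the pod's /etc/resolv.conf from DNSConfig alone, which is
			// what the two "Verifying customized DNS ..." execs above check.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------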
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":22,"skipped":365,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:39:54.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 23 23:39:55.691: INFO: Pod name wrapped-volume-race-3d3e8094-ecaa-410e-b1c1-4f8248167a0b: Found 0 pods out of 5 Mar 23 23:40:00.697: INFO: Pod name wrapped-volume-race-3d3e8094-ecaa-410e-b1c1-4f8248167a0b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3d3e8094-ecaa-410e-b1c1-4f8248167a0b in namespace emptydir-wrapper-9551, will wait for the garbage collector to delete the pods Mar 23 23:40:12.774: INFO: Deleting ReplicationController wrapped-volume-race-3d3e8094-ecaa-410e-b1c1-4f8248167a0b took: 6.643648ms Mar 23 23:40:13.174: INFO: Terminating ReplicationController wrapped-volume-race-3d3e8094-ecaa-410e-b1c1-4f8248167a0b pods took: 400.349538ms STEP: Creating RC which spawns configmap-volume pods Mar 23 23:40:23.200: INFO: Pod name wrapped-volume-race-e2466b06-7e45-418e-8301-d2f64b42883a: Found 0 pods out of 5 Mar 23 23:40:28.207: INFO: Pod name wrapped-volume-race-e2466b06-7e45-418e-8301-d2f64b42883a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-e2466b06-7e45-418e-8301-d2f64b42883a in namespace emptydir-wrapper-9551, will wait for the garbage collector to delete the pods Mar 23 23:40:42.284: INFO: Deleting ReplicationController wrapped-volume-race-e2466b06-7e45-418e-8301-d2f64b42883a took: 6.003498ms Mar 23 23:40:42.584: INFO: Terminating ReplicationController wrapped-volume-race-e2466b06-7e45-418e-8301-d2f64b42883a pods took: 300.209579ms STEP: Creating RC which spawns configmap-volume pods Mar 23 23:40:54.113: INFO: Pod name wrapped-volume-race-4f6380e5-a9cc-449c-8fb0-3c85c1bc075f: Found 0 pods out of 5 Mar 23 23:40:59.120: INFO: Pod name wrapped-volume-race-4f6380e5-a9cc-449c-8fb0-3c85c1bc075f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4f6380e5-a9cc-449c-8fb0-3c85c1bc075f in namespace emptydir-wrapper-9551, will wait for the garbage collector to delete the pods Mar 23 23:41:13.212: INFO: Deleting ReplicationController wrapped-volume-race-4f6380e5-a9cc-449c-8fb0-3c85c1bc075f took: 15.844449ms Mar 23 23:41:13.512: INFO: Terminating ReplicationController wrapped-volume-race-4f6380e5-a9cc-449c-8fb0-3c85c1bc075f pods took: 300.248488ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:41:24.346: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9551" for this suite. • [SLOW TEST:90.042 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":23,"skipped":378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:41:24.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 23 23:41:28.584: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1462 PodName:pod-sharedvolume-3833df52-7703-4b71-849c-ccdbab7fd20f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:41:28.585: INFO: >>> kubeConfig: /root/.kube/config I0323 23:41:28.626132 7 log.go:172] (0xc002567810) (0xc000b9a140) Create stream I0323 23:41:28.626182 7 log.go:172] (0xc002567810) (0xc000b9a140) Stream added, broadcasting: 1 I0323 23:41:28.632369 7 log.go:172] (0xc002567810) Reply frame received for 1 I0323 23:41:28.632428 7 log.go:172] (0xc002567810) (0xc001100500) Create stream I0323 23:41:28.632451 7 log.go:172] (0xc002567810) (0xc001100500) Stream added, broadcasting: 3 I0323 23:41:28.633747 7 log.go:172] (0xc002567810) Reply frame received for 3 I0323 23:41:28.633806 7 log.go:172] (0xc002567810) (0xc000b9a1e0) Create stream I0323 23:41:28.633831 7 log.go:172] (0xc002567810) (0xc000b9a1e0) Stream added, broadcasting: 5 I0323 23:41:28.634986 7 log.go:172] (0xc002567810) Reply frame received for 5 I0323 23:41:28.700455 7 log.go:172] (0xc002567810) Data frame received for 5 I0323 23:41:28.700491 7 log.go:172] (0xc000b9a1e0) (5) Data frame handling I0323 23:41:28.700644 7 log.go:172] (0xc002567810) Data frame received for 3 I0323 23:41:28.700693 7 log.go:172] (0xc001100500) (3) Data frame handling I0323 23:41:28.700723 7 log.go:172] (0xc001100500) (3) Data frame sent I0323 23:41:28.700748 7 log.go:172] (0xc002567810) Data frame received for 3 I0323 23:41:28.700768 7 log.go:172] (0xc001100500) (3) Data frame handling I0323 23:41:28.702217 7 log.go:172] (0xc002567810) Data frame received for 1 I0323 23:41:28.702246 7 log.go:172] (0xc000b9a140) (1) Data frame handling I0323 23:41:28.702268 7 log.go:172] (0xc000b9a140) (1) Data frame sent I0323 23:41:28.702305 7 log.go:172] (0xc002567810) 
(0xc000b9a140) Stream removed, broadcasting: 1 I0323 23:41:28.702348 7 log.go:172] (0xc002567810) Go away received I0323 23:41:28.702531 7 log.go:172] (0xc002567810) (0xc000b9a140) Stream removed, broadcasting: 1 I0323 23:41:28.702568 7 log.go:172] (0xc002567810) (0xc001100500) Stream removed, broadcasting: 3 I0323 23:41:28.702589 7 log.go:172] (0xc002567810) (0xc000b9a1e0) Stream removed, broadcasting: 5 Mar 23 23:41:28.702: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:41:28.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1462" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":24,"skipped":408,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:41:28.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6014 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-6014 I0323 23:41:28.859678 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6014, replica count: 2 I0323 23:41:31.910160 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 23:41:34.910464 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 23 23:41:34.910: INFO: Creating new exec pod Mar 23 23:41:39.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6014 execpodmm2dr -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 23 23:41:42.576: INFO: stderr: "I0323 23:41:42.486699 36 log.go:172] (0xc00003b130) (0xc0009ac3c0) Create stream\nI0323 23:41:42.486775 36 log.go:172] (0xc00003b130) (0xc0009ac3c0) Stream added, broadcasting: 1\nI0323 23:41:42.489588 36 log.go:172] (0xc00003b130) Reply frame received for 1\nI0323 23:41:42.489621 36 log.go:172] (0xc00003b130) (0xc000c6e000) Create stream\nI0323 23:41:42.489629 36 log.go:172] (0xc00003b130) (0xc000c6e000) Stream added, broadcasting: 3\nI0323 23:41:42.490815 36 log.go:172] (0xc00003b130) Reply frame received for 3\nI0323 23:41:42.490870 36 log.go:172] (0xc00003b130) (0xc00070a000) Create stream\nI0323 23:41:42.490890 36 log.go:172] (0xc00003b130) (0xc00070a000) Stream added, 
broadcasting: 5\nI0323 23:41:42.491856 36 log.go:172] (0xc00003b130) Reply frame received for 5\nI0323 23:41:42.568759 36 log.go:172] (0xc00003b130) Data frame received for 3\nI0323 23:41:42.568806 36 log.go:172] (0xc00003b130) Data frame received for 5\nI0323 23:41:42.568847 36 log.go:172] (0xc00070a000) (5) Data frame handling\nI0323 23:41:42.568873 36 log.go:172] (0xc00070a000) (5) Data frame sent\nI0323 23:41:42.568885 36 log.go:172] (0xc00003b130) Data frame received for 5\nI0323 23:41:42.568897 36 log.go:172] (0xc00070a000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0323 23:41:42.568932 36 log.go:172] (0xc000c6e000) (3) Data frame handling\nI0323 23:41:42.570908 36 log.go:172] (0xc00003b130) Data frame received for 1\nI0323 23:41:42.570929 36 log.go:172] (0xc0009ac3c0) (1) Data frame handling\nI0323 23:41:42.570941 36 log.go:172] (0xc0009ac3c0) (1) Data frame sent\nI0323 23:41:42.570953 36 log.go:172] (0xc00003b130) (0xc0009ac3c0) Stream removed, broadcasting: 1\nI0323 23:41:42.571055 36 log.go:172] (0xc00003b130) Go away received\nI0323 23:41:42.571431 36 log.go:172] (0xc00003b130) (0xc0009ac3c0) Stream removed, broadcasting: 1\nI0323 23:41:42.571453 36 log.go:172] (0xc00003b130) (0xc000c6e000) Stream removed, broadcasting: 3\nI0323 23:41:42.571466 36 log.go:172] (0xc00003b130) (0xc00070a000) Stream removed, broadcasting: 5\n" Mar 23 23:41:42.576: INFO: stdout: "" Mar 23 23:41:42.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6014 execpodmm2dr -- /bin/sh -x -c nc -zv -t -w 2 10.96.184.168 80' Mar 23 23:41:42.785: INFO: stderr: "I0323 23:41:42.712857 71 log.go:172] (0xc000a86000) (0xc000a98000) Create stream\nI0323 23:41:42.712930 71 log.go:172] (0xc000a86000) (0xc000a98000) Stream added, broadcasting: 1\nI0323 23:41:42.715104 71 log.go:172] (0xc000a86000) Reply frame received for 1\nI0323 23:41:42.715139 71 log.go:172] (0xc000a86000) (0xc00061ab40) Create stream\nI0323 23:41:42.715146 71 log.go:172] (0xc000a86000) (0xc00061ab40) Stream added, broadcasting: 3\nI0323 23:41:42.716232 71 log.go:172] (0xc000a86000) Reply frame received for 3\nI0323 23:41:42.716268 71 log.go:172] (0xc000a86000) (0xc0008492c0) Create stream\nI0323 23:41:42.716278 71 log.go:172] (0xc000a86000) (0xc0008492c0) Stream added, broadcasting: 5\nI0323 23:41:42.717522 71 log.go:172] (0xc000a86000) Reply frame received for 5\nI0323 23:41:42.779797 71 log.go:172] (0xc000a86000) Data frame received for 5\nI0323 23:41:42.779820 71 log.go:172] (0xc0008492c0) (5) Data frame handling\nI0323 23:41:42.779831 71 log.go:172] (0xc0008492c0) (5) Data frame sent\nI0323 23:41:42.779842 71 log.go:172] (0xc000a86000) Data frame received for 5\nI0323 23:41:42.779852 71 log.go:172] (0xc0008492c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.184.168 80\nConnection to 10.96.184.168 80 port [tcp/http] succeeded!\nI0323 23:41:42.779876 71 log.go:172] (0xc000a86000) Data frame received for 3\nI0323 23:41:42.779899 71 log.go:172] (0xc00061ab40) (3) Data frame handling\nI0323 23:41:42.781054 71 log.go:172] (0xc000a86000) Data frame received for 1\nI0323 23:41:42.781069 71 log.go:172] (0xc000a98000) (1) Data frame handling\nI0323 23:41:42.781084 71 log.go:172] (0xc000a98000) (1) Data frame sent\nI0323 23:41:42.781097 71 log.go:172] (0xc000a86000) (0xc000a98000) Stream removed, broadcasting: 1\nI0323 23:41:42.781224 71 log.go:172] (0xc000a86000) Go away 
received\nI0323 23:41:42.781472 71 log.go:172] (0xc000a86000) (0xc000a98000) Stream removed, broadcasting: 1\nI0323 23:41:42.781485 71 log.go:172] (0xc000a86000) (0xc00061ab40) Stream removed, broadcasting: 3\nI0323 23:41:42.781490 71 log.go:172] (0xc000a86000) (0xc0008492c0) Stream removed, broadcasting: 5\n" Mar 23 23:41:42.785: INFO: stdout: "" Mar 23 23:41:42.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6014 execpodmm2dr -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30442' Mar 23 23:41:42.983: INFO: stderr: "I0323 23:41:42.924121 95 log.go:172] (0xc0009e0000) (0xc000434aa0) Create stream\nI0323 23:41:42.924193 95 log.go:172] (0xc0009e0000) (0xc000434aa0) Stream added, broadcasting: 1\nI0323 23:41:42.927280 95 log.go:172] (0xc0009e0000) Reply frame received for 1\nI0323 23:41:42.927314 95 log.go:172] (0xc0009e0000) (0xc0006c92c0) Create stream\nI0323 23:41:42.927322 95 log.go:172] (0xc0009e0000) (0xc0006c92c0) Stream added, broadcasting: 3\nI0323 23:41:42.928278 95 log.go:172] (0xc0009e0000) Reply frame received for 3\nI0323 23:41:42.928325 95 log.go:172] (0xc0009e0000) (0xc0006c9360) Create stream\nI0323 23:41:42.928341 95 log.go:172] (0xc0009e0000) (0xc0006c9360) Stream added, broadcasting: 5\nI0323 23:41:42.929417 95 log.go:172] (0xc0009e0000) Reply frame received for 5\nI0323 23:41:42.977353 95 log.go:172] (0xc0009e0000) Data frame received for 3\nI0323 23:41:42.977384 95 log.go:172] (0xc0006c92c0) (3) Data frame handling\nI0323 23:41:42.977400 95 log.go:172] (0xc0009e0000) Data frame received for 5\nI0323 23:41:42.977405 95 log.go:172] (0xc0006c9360) (5) Data frame handling\nI0323 23:41:42.977411 95 log.go:172] (0xc0006c9360) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 30442\nConnection to 172.17.0.13 30442 port [tcp/30442] succeeded!\nI0323 23:41:42.977541 95 log.go:172] (0xc0009e0000) Data frame received for 5\nI0323 23:41:42.977561 95 log.go:172] (0xc0006c9360) (5) Data frame handling\nI0323 23:41:42.979069 95 log.go:172] (0xc0009e0000) Data frame received for 1\nI0323 23:41:42.979083 95 log.go:172] (0xc000434aa0) (1) Data frame handling\nI0323 23:41:42.979090 95 log.go:172] (0xc000434aa0) (1) Data frame sent\nI0323 23:41:42.979096 95 log.go:172] (0xc0009e0000) (0xc000434aa0) Stream removed, broadcasting: 1\nI0323 23:41:42.979147 95 log.go:172] (0xc0009e0000) Go away received\nI0323 23:41:42.979362 95 log.go:172] (0xc0009e0000) (0xc000434aa0) Stream removed, broadcasting: 1\nI0323 23:41:42.979374 95 log.go:172] (0xc0009e0000) (0xc0006c92c0) Stream removed, broadcasting: 3\nI0323 23:41:42.979380 95 log.go:172] (0xc0009e0000) (0xc0006c9360) Stream removed, broadcasting: 5\n" Mar 23 23:41:42.984: INFO: stdout: "" Mar 23 23:41:42.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6014 execpodmm2dr -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30442' Mar 23 23:41:43.200: INFO: stderr: "I0323 23:41:43.128527 117 log.go:172] (0xc000b022c0) (0xc0006ff4a0) Create stream\nI0323 23:41:43.128572 117 log.go:172] (0xc000b022c0) (0xc0006ff4a0) Stream added, broadcasting: 1\nI0323 23:41:43.131167 117 log.go:172] (0xc000b022c0) Reply frame received for 1\nI0323 23:41:43.131192 117 log.go:172] (0xc000b022c0) (0xc00082e0a0) Create stream\nI0323 23:41:43.131203 117 log.go:172] (0xc000b022c0) (0xc00082e0a0) Stream added, broadcasting: 3\nI0323 23:41:43.132190 117 log.go:172] (0xc000b022c0) Reply frame received for 3\nI0323 
23:41:43.132229 117 log.go:172] (0xc000b022c0) (0xc000376000) Create stream\nI0323 23:41:43.132245 117 log.go:172] (0xc000b022c0) (0xc000376000) Stream added, broadcasting: 5\nI0323 23:41:43.133448 117 log.go:172] (0xc000b022c0) Reply frame received for 5\nI0323 23:41:43.192505 117 log.go:172] (0xc000b022c0) Data frame received for 3\nI0323 23:41:43.192535 117 log.go:172] (0xc00082e0a0) (3) Data frame handling\nI0323 23:41:43.192554 117 log.go:172] (0xc000b022c0) Data frame received for 5\nI0323 23:41:43.192560 117 log.go:172] (0xc000376000) (5) Data frame handling\nI0323 23:41:43.192568 117 log.go:172] (0xc000376000) (5) Data frame sent\nI0323 23:41:43.192577 117 log.go:172] (0xc000b022c0) Data frame received for 5\nI0323 23:41:43.192585 117 log.go:172] (0xc000376000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 30442\nConnection to 172.17.0.12 30442 port [tcp/30442] succeeded!\nI0323 23:41:43.194212 117 log.go:172] (0xc000b022c0) Data frame received for 1\nI0323 23:41:43.194238 117 log.go:172] (0xc0006ff4a0) (1) Data frame handling\nI0323 23:41:43.194251 117 log.go:172] (0xc0006ff4a0) (1) Data frame sent\nI0323 23:41:43.194277 117 log.go:172] (0xc000b022c0) (0xc0006ff4a0) Stream removed, broadcasting: 1\nI0323 23:41:43.194299 117 log.go:172] (0xc000b022c0) Go away received\nI0323 23:41:43.194643 117 log.go:172] (0xc000b022c0) (0xc0006ff4a0) Stream removed, broadcasting: 1\nI0323 23:41:43.194670 117 log.go:172] (0xc000b022c0) (0xc00082e0a0) Stream removed, broadcasting: 3\nI0323 23:41:43.194678 117 log.go:172] (0xc000b022c0) (0xc000376000) Stream removed, broadcasting: 5\n" Mar 23 23:41:43.200: INFO: stdout: "" Mar 23 23:41:43.200: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:41:43.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6014" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:14.567 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":25,"skipped":412,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:41:43.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
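------------------------------
Stepping back to the Services spec that just finished: the ExternalName-to-NodePort flip is nothing more than an update of spec.type plus the fields each type needs. A sketch with a hypothetical CNAME target and selector (the run's real service backs onto the externalname-service replication controller); the ClusterIP and nodePort quoted in the comments are the ones the nc probes above connected to.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Start as ExternalName: a pure DNS alias, no ClusterIP, no endpoints.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "example.com", // hypothetical CNAME target
		},
	}

	// The type change is an ordinary spec update: drop the CNAME, add a
	// selector and ports. The apiserver then allocates a ClusterIP
	// (10.96.184.168 above) and a nodePort (30442 above), after which the
	// service name, the ClusterIP, and both node IPs all accept connections.
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Selector = map[string]string{"name": "externalname-service"}
	svc.Spec.Ports = []corev1.ServicePort{{
		Port:       80,
		TargetPort: intstr.FromInt(80),
	}}

	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
------------------------------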
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 23 23:41:51.419: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 23 23:41:51.448: INFO: Pod pod-with-poststart-exec-hook still exists Mar 23 23:41:53.448: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 23 23:41:53.451: INFO: Pod pod-with-poststart-exec-hook still exists Mar 23 23:41:55.448: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 23 23:41:55.453: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:41:55.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4847" for this suite. • [SLOW TEST:12.160 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":419,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:41:55.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 23:41:55.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79966995-2b30-4311-aa8e-0400285a6899" in namespace "downward-api-9454" to be "Succeeded or Failed" Mar 23 23:41:55.567: INFO: Pod "downwardapi-volume-79966995-2b30-4311-aa8e-0400285a6899": Phase="Pending", Reason="", readiness=false. Elapsed: 3.947965ms Mar 23 23:41:57.580: INFO: Pod "downwardapi-volume-79966995-2b30-4311-aa8e-0400285a6899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017638018s Mar 23 23:41:59.585: INFO: Pod "downwardapi-volume-79966995-2b30-4311-aa8e-0400285a6899": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022451914s STEP: Saw pod success Mar 23 23:41:59.585: INFO: Pod "downwardapi-volume-79966995-2b30-4311-aa8e-0400285a6899" satisfied condition "Succeeded or Failed" Mar 23 23:41:59.589: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-79966995-2b30-4311-aa8e-0400285a6899 container client-container: STEP: delete the pod Mar 23 23:41:59.668: INFO: Waiting for pod downwardapi-volume-79966995-2b30-4311-aa8e-0400285a6899 to disappear Mar 23 23:41:59.674: INFO: Pod downwardapi-volume-79966995-2b30-4311-aa8e-0400285a6899 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:41:59.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9454" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":423,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:41:59.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 23 23:41:59.730: INFO: Waiting up to 5m0s for pod "pod-5d2428be-a8b1-438f-aa77-6852d078802b" in namespace "emptydir-8486" to be "Succeeded or Failed" Mar 23 23:41:59.734: INFO: Pod "pod-5d2428be-a8b1-438f-aa77-6852d078802b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.37123ms Mar 23 23:42:01.759: INFO: Pod "pod-5d2428be-a8b1-438f-aa77-6852d078802b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028952903s Mar 23 23:42:03.762: INFO: Pod "pod-5d2428be-a8b1-438f-aa77-6852d078802b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032633668s STEP: Saw pod success Mar 23 23:42:03.762: INFO: Pod "pod-5d2428be-a8b1-438f-aa77-6852d078802b" satisfied condition "Succeeded or Failed" Mar 23 23:42:03.765: INFO: Trying to get logs from node latest-worker2 pod pod-5d2428be-a8b1-438f-aa77-6852d078802b container test-container: STEP: delete the pod Mar 23 23:42:03.833: INFO: Waiting for pod pod-5d2428be-a8b1-438f-aa77-6852d078802b to disappear Mar 23 23:42:03.839: INFO: Pod pod-5d2428be-a8b1-438f-aa77-6852d078802b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:42:03.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8486" for this suite. 
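------------------------------
The emptyDir specs in this run, including the shared-volumes one earlier, all hang on the same volume source; "default" in the test name refers to the storage medium and "0777" to the file mode the test container sets and reads back. A sketch assuming a busybox image and an arbitrary non-root UID (the suite's exact values are not in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001) // any non-root UID

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0777-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty Medium is the "default" of the test name: backed by
				// the node's disk; Medium: "Memory" would be tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /test/f && chmod 0777 /test/f && stat -c %a /test/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Mounting the same emptyDir into two containers of one pod, as the shared-volumes spec does, only requires repeating the VolumeMount in a second container entry.
------------------------------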
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":429,"failed":0} SSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:42:03.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 23 23:42:08.504: INFO: Successfully updated pod "adopt-release-djw76" STEP: Checking that the Job readopts the Pod Mar 23 23:42:08.504: INFO: Waiting up to 15m0s for pod "adopt-release-djw76" in namespace "job-9849" to be "adopted" Mar 23 23:42:08.510: INFO: Pod "adopt-release-djw76": Phase="Running", Reason="", readiness=true. Elapsed: 5.535504ms Mar 23 23:42:10.514: INFO: Pod "adopt-release-djw76": Phase="Running", Reason="", readiness=true. Elapsed: 2.00963103s Mar 23 23:42:10.514: INFO: Pod "adopt-release-djw76" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 23 23:42:11.023: INFO: Successfully updated pod "adopt-release-djw76" STEP: Checking that the Job releases the Pod Mar 23 23:42:11.023: INFO: Waiting up to 15m0s for pod "adopt-release-djw76" in namespace "job-9849" to be "released" Mar 23 23:42:11.043: INFO: Pod "adopt-release-djw76": Phase="Running", Reason="", readiness=true. Elapsed: 20.36696ms Mar 23 23:42:13.048: INFO: Pod "adopt-release-djw76": Phase="Running", Reason="", readiness=true. Elapsed: 2.024930704s Mar 23 23:42:13.048: INFO: Pod "adopt-release-djw76" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:42:13.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9849" for this suite. 
• [SLOW TEST:9.212 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":29,"skipped":433,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:42:13.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 23 23:42:13.117: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 23 23:42:18.134: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:42:18.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8931" for this suite. 
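------------------------------
The ReplicationController version of release hinges entirely on the selector. A sketch of an RC like the spec's pod-release one (image is a placeholder); relabeling one of its pods out of the selector is what "Then the pod is released" above refers to, and the RC simultaneously spawns a replacement to get back to Replicas.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	one := int32(1)

	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			// The selector is the whole ownership contract: a pod whose labels
			// stop matching is released (its controllerRef removed).
			Selector: map[string]string{"name": "pod-release"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-release"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "pod-release", Image: "nginx"}}, // placeholder image
				},
			},
		},
	}

	out, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(out))
}
------------------------------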
• [SLOW TEST:5.219 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":30,"skipped":444,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:42:18.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-b064778e-e434-417b-a9a4-7c7163a7b04a STEP: Creating secret with name s-test-opt-upd-eefa36a5-b226-41cf-9321-44d81eb3d308 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b064778e-e434-417b-a9a4-7c7163a7b04a STEP: Updating secret s-test-opt-upd-eefa36a5-b226-41cf-9321-44d81eb3d308 STEP: Creating secret with name s-test-opt-create-0ecc3115-f8db-4c3b-8eab-cd5126c2f57a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:43:40.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7731" for this suite. 
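------------------------------
The optional-updates spec above mounts one projected volume fed by several secrets, with Optional set so a missing source does not block the mount; kubelet then resyncs the volume contents as secrets are deleted, updated, and created, which is what the long "waiting to observe update in volume" step watches for. A sketch of just the volume, using the secret names from the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true

	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del-b064778e-e434-417b-a9a4-7c7163a7b04a"},
						Optional:             &optional,
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd-eefa36a5-b226-41cf-9321-44d81eb3d308"},
						Optional:             &optional,
					}},
					// Created only after the pod is running; Optional lets the
					// mount succeed in the meantime.
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-create-0ecc3115-f8db-4c3b-8eab-cd5126c2f57a"},
						Optional:             &optional,
					}},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------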
• [SLOW TEST:82.630 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":449,"failed":0} [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:43:40.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Mar 23 23:43:40.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Mar 23 23:43:41.145: INFO: stderr: "" Mar 23 23:43:41.145: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:43:41.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4699" for this suite. 
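------------------------------
kubectl api-versions is a thin wrapper over the discovery endpoint, so the spec's assertion can be reproduced with client-go directly. A sketch assuming the suite's kubeconfig path; note the legacy core group appears as plain "v1", the last entry in the stdout above.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same data kubectl api-versions prints: every group/version the
	// apiserver advertises via discovery.
	groups, err := client.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	found := false
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion)
			if v.GroupVersion == "v1" {
				found = true
			}
		}
	}
	fmt.Println("v1 present:", found) // the check the spec makes on kubectl's stdout
}
------------------------------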
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":32,"skipped":449,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:43:41.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:43:41.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1984" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":464,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:43:41.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 23:43:41.409: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23b4be28-1e71-40bb-b04d-d2c35cbf8c73" in namespace "downward-api-1726" to be "Succeeded or Failed" Mar 23 23:43:41.423: INFO: Pod "downwardapi-volume-23b4be28-1e71-40bb-b04d-d2c35cbf8c73": Phase="Pending", Reason="", readiness=false. Elapsed: 14.470405ms Mar 23 23:43:43.428: INFO: Pod "downwardapi-volume-23b4be28-1e71-40bb-b04d-d2c35cbf8c73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019044493s Mar 23 23:43:45.432: INFO: Pod "downwardapi-volume-23b4be28-1e71-40bb-b04d-d2c35cbf8c73": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.023304967s STEP: Saw pod success Mar 23 23:43:45.432: INFO: Pod "downwardapi-volume-23b4be28-1e71-40bb-b04d-d2c35cbf8c73" satisfied condition "Succeeded or Failed" Mar 23 23:43:45.435: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-23b4be28-1e71-40bb-b04d-d2c35cbf8c73 container client-container: STEP: delete the pod Mar 23 23:43:45.474: INFO: Waiting for pod downwardapi-volume-23b4be28-1e71-40bb-b04d-d2c35cbf8c73 to disappear Mar 23 23:43:45.478: INFO: Pod downwardapi-volume-23b4be28-1e71-40bb-b04d-d2c35cbf8c73 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:43:45.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1726" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":484,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:43:45.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-ea55ca84-9a06-4886-87b4-e279d7c7de54 STEP: Creating a pod to test consume secrets Mar 23 23:43:45.560: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fcb77565-8463-4b73-8e90-72ab2c559c99" in namespace "projected-3650" to be "Succeeded or Failed" Mar 23 23:43:45.574: INFO: Pod "pod-projected-secrets-fcb77565-8463-4b73-8e90-72ab2c559c99": Phase="Pending", Reason="", readiness=false. Elapsed: 13.84562ms Mar 23 23:43:47.577: INFO: Pod "pod-projected-secrets-fcb77565-8463-4b73-8e90-72ab2c559c99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017608096s Mar 23 23:43:49.581: INFO: Pod "pod-projected-secrets-fcb77565-8463-4b73-8e90-72ab2c559c99": Phase="Running", Reason="", readiness=true. Elapsed: 4.021285206s Mar 23 23:43:51.585: INFO: Pod "pod-projected-secrets-fcb77565-8463-4b73-8e90-72ab2c559c99": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.025418269s STEP: Saw pod success Mar 23 23:43:51.585: INFO: Pod "pod-projected-secrets-fcb77565-8463-4b73-8e90-72ab2c559c99" satisfied condition "Succeeded or Failed" Mar 23 23:43:51.588: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-fcb77565-8463-4b73-8e90-72ab2c559c99 container projected-secret-volume-test: STEP: delete the pod Mar 23 23:43:51.645: INFO: Waiting for pod pod-projected-secrets-fcb77565-8463-4b73-8e90-72ab2c559c99 to disappear Mar 23 23:43:51.650: INFO: Pod pod-projected-secrets-fcb77565-8463-4b73-8e90-72ab2c559c99 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:43:51.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3650" for this suite. • [SLOW TEST:6.173 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:43:51.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Mar 23 23:43:51.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Mar 23 23:43:51.786: INFO: stderr: "" Mar 23 23:43:51.786: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:43:51.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2738" for this suite. 
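------------------------------
The cluster-info output above is assembled from two sources: the server URL in the kubeconfig, and cluster services the apiserver can proxy to. Roughly, and hedged (kubectl's exact lookup varies by version; this sketch assumes the kubernetes.io/cluster-service=true label in kube-system and the suite's kubeconfig path):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The master line is just the server URL from the kubeconfig.
	fmt.Println("Kubernetes master is running at", cfg.Host)

	// The remaining lines come from services labeled as cluster services;
	// kubectl prints an apiserver proxy URL for each (it also appends the
	// port name, e.g. kube-dns:dns in the output above, omitted here).
	svcs, err := client.CoreV1().Services(metav1.NamespaceSystem).List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/cluster-service=true"})
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s is running at %s/api/v1/namespaces/%s/services/%s/proxy\n",
			s.Name, cfg.Host, s.Namespace, s.Name)
	}
}
------------------------------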
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":36,"skipped":532,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:43:51.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 23 23:43:52.748: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 23 23:43:54.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603832, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603832, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603832, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603832, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 23:43:56.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603832, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603832, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603832, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720603832, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 23:43:59.782: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:43:59.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3617-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:44:00.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-706" for this suite. STEP: Destroying namespace "webhook-706-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.258 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":37,"skipped":534,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:44:01.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-48b7f3bb-277b-4383-a510-ca8389f98a70 STEP: Creating a pod to test consume secrets Mar 23 23:44:01.133: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ade3330d-e886-420c-bd70-f4c3b8a186e5" in namespace "projected-347" to be "Succeeded or Failed" Mar 23 23:44:01.174: INFO: Pod "pod-projected-secrets-ade3330d-e886-420c-bd70-f4c3b8a186e5": Phase="Pending", Reason="", readiness=false. Elapsed: 40.575762ms Mar 23 23:44:03.178: INFO: Pod "pod-projected-secrets-ade3330d-e886-420c-bd70-f4c3b8a186e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045020845s Mar 23 23:44:05.182: INFO: Pod "pod-projected-secrets-ade3330d-e886-420c-bd70-f4c3b8a186e5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049214765s STEP: Saw pod success Mar 23 23:44:05.182: INFO: Pod "pod-projected-secrets-ade3330d-e886-420c-bd70-f4c3b8a186e5" satisfied condition "Succeeded or Failed" Mar 23 23:44:05.185: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-ade3330d-e886-420c-bd70-f4c3b8a186e5 container secret-volume-test: STEP: delete the pod Mar 23 23:44:05.203: INFO: Waiting for pod pod-projected-secrets-ade3330d-e886-420c-bd70-f4c3b8a186e5 to disappear Mar 23 23:44:05.207: INFO: Pod pod-projected-secrets-ade3330d-e886-420c-bd70-f4c3b8a186e5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:44:05.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-347" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":38,"skipped":536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:44:05.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:44:09.393: INFO: Waiting up to 5m0s for pod "client-envvars-0ec83776-3d79-4ecc-9818-ae382f8ce51e" in namespace "pods-2120" to be "Succeeded or Failed" Mar 23 23:44:09.395: INFO: Pod "client-envvars-0ec83776-3d79-4ecc-9818-ae382f8ce51e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.68964ms Mar 23 23:44:11.400: INFO: Pod "client-envvars-0ec83776-3d79-4ecc-9818-ae382f8ce51e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006717985s Mar 23 23:44:13.404: INFO: Pod "client-envvars-0ec83776-3d79-4ecc-9818-ae382f8ce51e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010716137s STEP: Saw pod success Mar 23 23:44:13.404: INFO: Pod "client-envvars-0ec83776-3d79-4ecc-9818-ae382f8ce51e" satisfied condition "Succeeded or Failed" Mar 23 23:44:13.407: INFO: Trying to get logs from node latest-worker2 pod client-envvars-0ec83776-3d79-4ecc-9818-ae382f8ce51e container env3cont: STEP: delete the pod Mar 23 23:44:13.439: INFO: Waiting for pod client-envvars-0ec83776-3d79-4ecc-9818-ae382f8ce51e to disappear Mar 23 23:44:13.456: INFO: Pod client-envvars-0ec83776-3d79-4ecc-9818-ae382f8ce51e no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:44:13.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2120" for this suite. 
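
[Editor's note] The Pods test above starts a server pod behind a service, then launches a client pod and asserts that the kubelet injected the generated service environment variables into it. A rough hand-run equivalent; the names fooservice and env-client are hypothetical, not the fixtures generated in the log:

    # The service must exist before the client pod starts, otherwise the
    # kubelet will not inject variables for it.
    kubectl create service clusterip fooservice --tcp=8765:8080
    kubectl run env-client --image=busybox:1.29 --restart=Never --command -- sh -c 'env | grep ^FOOSERVICE_'
    kubectl logs env-client
    # Expected output includes FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT.
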
• [SLOW TEST:8.247 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:44:13.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-52w8 STEP: Creating a pod to test atomic-volume-subpath Mar 23 23:44:13.553: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-52w8" in namespace "subpath-2306" to be "Succeeded or Failed" Mar 23 23:44:13.557: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384121ms Mar 23 23:44:15.561: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008695014s Mar 23 23:44:17.566: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Running", Reason="", readiness=true. Elapsed: 4.012996543s Mar 23 23:44:19.570: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Running", Reason="", readiness=true. Elapsed: 6.017219981s Mar 23 23:44:21.576: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Running", Reason="", readiness=true. Elapsed: 8.022977701s Mar 23 23:44:23.580: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Running", Reason="", readiness=true. Elapsed: 10.027134447s Mar 23 23:44:25.584: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Running", Reason="", readiness=true. Elapsed: 12.031181185s Mar 23 23:44:27.588: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Running", Reason="", readiness=true. Elapsed: 14.035754086s Mar 23 23:44:29.592: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Running", Reason="", readiness=true. Elapsed: 16.039807715s Mar 23 23:44:31.597: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Running", Reason="", readiness=true. Elapsed: 18.044303006s Mar 23 23:44:33.601: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Running", Reason="", readiness=true. Elapsed: 20.048434113s Mar 23 23:44:35.605: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Running", Reason="", readiness=true. Elapsed: 22.05280325s Mar 23 23:44:37.645: INFO: Pod "pod-subpath-test-secret-52w8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.092233013s STEP: Saw pod success Mar 23 23:44:37.645: INFO: Pod "pod-subpath-test-secret-52w8" satisfied condition "Succeeded or Failed" Mar 23 23:44:37.648: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-52w8 container test-container-subpath-secret-52w8: STEP: delete the pod Mar 23 23:44:37.685: INFO: Waiting for pod pod-subpath-test-secret-52w8 to disappear Mar 23 23:44:37.693: INFO: Pod pod-subpath-test-secret-52w8 no longer exists STEP: Deleting pod pod-subpath-test-secret-52w8 Mar 23 23:44:37.693: INFO: Deleting pod "pod-subpath-test-secret-52w8" in namespace "subpath-2306" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:44:37.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2306" for this suite. • [SLOW TEST:24.238 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":40,"skipped":601,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:44:37.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 23 23:44:37.739: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 23 23:44:49.306: INFO: >>> kubeConfig: /root/.kube/config Mar 23 23:44:51.229: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:45:01.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1696" for this suite. 
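
[Editor's note] The CRD/OpenAPI test above registers CRDs that serve several versions of one API group and verifies that every served version shows up in the published OpenAPI documents. A sketch of such a two-version CRD; the group, kind, and empty schemas below are made-up placeholders, not the test's generated fixtures:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
      versions:
      - name: v1
        served: true
        storage: true        # exactly one version may be the storage version
        schema:
          openAPIV3Schema:
            type: object
      - name: v2
        served: true
        storage: false
        schema:
          openAPIV3Schema:
            type: object
    EOF
    # Both served versions should now be visible to OpenAPI consumers, e.g.:
    kubectl explain foos --api-version=example.com/v2
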
• [SLOW TEST:24.164 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":41,"skipped":610,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:45:01.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6203 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6203 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6203 Mar 23 23:45:01.953: INFO: Found 0 stateful pods, waiting for 1 Mar 23 23:45:11.958: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 23 23:45:11.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6203 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 23:45:12.225: INFO: stderr: "I0323 23:45:12.088889 178 log.go:172] (0xc0009b9550) (0xc000a2c780) Create stream\nI0323 23:45:12.088941 178 log.go:172] (0xc0009b9550) (0xc000a2c780) Stream added, broadcasting: 1\nI0323 23:45:12.093641 178 log.go:172] (0xc0009b9550) Reply frame received for 1\nI0323 23:45:12.093682 178 log.go:172] (0xc0009b9550) (0xc0005db680) Create stream\nI0323 23:45:12.093703 178 log.go:172] (0xc0009b9550) (0xc0005db680) Stream added, broadcasting: 3\nI0323 23:45:12.094679 178 log.go:172] (0xc0009b9550) Reply frame received for 3\nI0323 23:45:12.094713 178 log.go:172] (0xc0009b9550) (0xc00050caa0) Create stream\nI0323 23:45:12.094723 178 log.go:172] (0xc0009b9550) (0xc00050caa0) Stream added, broadcasting: 5\nI0323 23:45:12.095535 178 log.go:172] (0xc0009b9550) Reply frame received for 5\nI0323 23:45:12.174584 178 log.go:172] (0xc0009b9550) Data frame received for 5\nI0323 23:45:12.174633 178 log.go:172] 
(0xc00050caa0) (5) Data frame handling\nI0323 23:45:12.174673 178 log.go:172] (0xc00050caa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 23:45:12.218086 178 log.go:172] (0xc0009b9550) Data frame received for 3\nI0323 23:45:12.218131 178 log.go:172] (0xc0005db680) (3) Data frame handling\nI0323 23:45:12.218215 178 log.go:172] (0xc0005db680) (3) Data frame sent\nI0323 23:45:12.218296 178 log.go:172] (0xc0009b9550) Data frame received for 3\nI0323 23:45:12.218332 178 log.go:172] (0xc0005db680) (3) Data frame handling\nI0323 23:45:12.218631 178 log.go:172] (0xc0009b9550) Data frame received for 5\nI0323 23:45:12.218659 178 log.go:172] (0xc00050caa0) (5) Data frame handling\nI0323 23:45:12.220521 178 log.go:172] (0xc0009b9550) Data frame received for 1\nI0323 23:45:12.220545 178 log.go:172] (0xc000a2c780) (1) Data frame handling\nI0323 23:45:12.220559 178 log.go:172] (0xc000a2c780) (1) Data frame sent\nI0323 23:45:12.220574 178 log.go:172] (0xc0009b9550) (0xc000a2c780) Stream removed, broadcasting: 1\nI0323 23:45:12.220589 178 log.go:172] (0xc0009b9550) Go away received\nI0323 23:45:12.221012 178 log.go:172] (0xc0009b9550) (0xc000a2c780) Stream removed, broadcasting: 1\nI0323 23:45:12.221040 178 log.go:172] (0xc0009b9550) (0xc0005db680) Stream removed, broadcasting: 3\nI0323 23:45:12.221059 178 log.go:172] (0xc0009b9550) (0xc00050caa0) Stream removed, broadcasting: 5\n" Mar 23 23:45:12.226: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 23:45:12.226: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 23:45:12.229: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 23 23:45:22.232: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 23 23:45:22.232: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 23:45:22.246: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999699s Mar 23 23:45:23.250: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993483861s Mar 23 23:45:24.280: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988742338s Mar 23 23:45:25.284: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.958998536s Mar 23 23:45:26.288: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.954321301s Mar 23 23:45:27.293: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.950358162s Mar 23 23:45:28.298: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.945779295s Mar 23 23:45:29.303: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.94129447s Mar 23 23:45:30.307: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.9362599s Mar 23 23:45:31.311: INFO: Verifying statefulset ss doesn't scale past 1 for another 932.002199ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6203 Mar 23 23:45:32.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6203 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 23:45:32.539: INFO: stderr: "I0323 23:45:32.439891 199 log.go:172] (0xc0008e8000) (0xc00069f2c0) Create stream\nI0323 23:45:32.439966 199 log.go:172] (0xc0008e8000) (0xc00069f2c0) Stream added, broadcasting: 1\nI0323 
23:45:32.442916 199 log.go:172] (0xc0008e8000) Reply frame received for 1\nI0323 23:45:32.442970 199 log.go:172] (0xc0008e8000) (0xc00089e000) Create stream\nI0323 23:45:32.442984 199 log.go:172] (0xc0008e8000) (0xc00089e000) Stream added, broadcasting: 3\nI0323 23:45:32.444061 199 log.go:172] (0xc0008e8000) Reply frame received for 3\nI0323 23:45:32.444090 199 log.go:172] (0xc0008e8000) (0xc0008de000) Create stream\nI0323 23:45:32.444104 199 log.go:172] (0xc0008e8000) (0xc0008de000) Stream added, broadcasting: 5\nI0323 23:45:32.445076 199 log.go:172] (0xc0008e8000) Reply frame received for 5\nI0323 23:45:32.532977 199 log.go:172] (0xc0008e8000) Data frame received for 3\nI0323 23:45:32.533003 199 log.go:172] (0xc00089e000) (3) Data frame handling\nI0323 23:45:32.533030 199 log.go:172] (0xc0008e8000) Data frame received for 5\nI0323 23:45:32.533063 199 log.go:172] (0xc0008de000) (5) Data frame handling\nI0323 23:45:32.533077 199 log.go:172] (0xc0008de000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 23:45:32.533101 199 log.go:172] (0xc00089e000) (3) Data frame sent\nI0323 23:45:32.533338 199 log.go:172] (0xc0008e8000) Data frame received for 3\nI0323 23:45:32.533357 199 log.go:172] (0xc00089e000) (3) Data frame handling\nI0323 23:45:32.533384 199 log.go:172] (0xc0008e8000) Data frame received for 5\nI0323 23:45:32.533397 199 log.go:172] (0xc0008de000) (5) Data frame handling\nI0323 23:45:32.534662 199 log.go:172] (0xc0008e8000) Data frame received for 1\nI0323 23:45:32.534672 199 log.go:172] (0xc00069f2c0) (1) Data frame handling\nI0323 23:45:32.534679 199 log.go:172] (0xc00069f2c0) (1) Data frame sent\nI0323 23:45:32.534687 199 log.go:172] (0xc0008e8000) (0xc00069f2c0) Stream removed, broadcasting: 1\nI0323 23:45:32.534697 199 log.go:172] (0xc0008e8000) Go away received\nI0323 23:45:32.535148 199 log.go:172] (0xc0008e8000) (0xc00069f2c0) Stream removed, broadcasting: 1\nI0323 23:45:32.535177 199 log.go:172] (0xc0008e8000) (0xc00089e000) Stream removed, broadcasting: 3\nI0323 23:45:32.535191 199 log.go:172] (0xc0008e8000) (0xc0008de000) Stream removed, broadcasting: 5\n" Mar 23 23:45:32.539: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 23:45:32.539: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 23:45:32.543: INFO: Found 1 stateful pods, waiting for 3 Mar 23 23:45:42.547: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 23 23:45:42.547: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 23 23:45:42.547: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 23 23:45:42.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6203 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 23:45:42.814: INFO: stderr: "I0323 23:45:42.716798 218 log.go:172] (0xc0009ea000) (0xc0003bebe0) Create stream\nI0323 23:45:42.716871 218 log.go:172] (0xc0009ea000) (0xc0003bebe0) Stream added, broadcasting: 1\nI0323 23:45:42.738575 218 log.go:172] (0xc0009ea000) Reply frame received for 1\nI0323 23:45:42.738639 218 log.go:172] (0xc0009ea000) (0xc000a8a000) Create stream\nI0323 
23:45:42.738666 218 log.go:172] (0xc0009ea000) (0xc000a8a000) Stream added, broadcasting: 3\nI0323 23:45:42.740778 218 log.go:172] (0xc0009ea000) Reply frame received for 3\nI0323 23:45:42.740814 218 log.go:172] (0xc0009ea000) (0xc000b76000) Create stream\nI0323 23:45:42.740824 218 log.go:172] (0xc0009ea000) (0xc000b76000) Stream added, broadcasting: 5\nI0323 23:45:42.742057 218 log.go:172] (0xc0009ea000) Reply frame received for 5\nI0323 23:45:42.808836 218 log.go:172] (0xc0009ea000) Data frame received for 3\nI0323 23:45:42.808883 218 log.go:172] (0xc000a8a000) (3) Data frame handling\nI0323 23:45:42.808903 218 log.go:172] (0xc000a8a000) (3) Data frame sent\nI0323 23:45:42.808915 218 log.go:172] (0xc0009ea000) Data frame received for 3\nI0323 23:45:42.808933 218 log.go:172] (0xc000a8a000) (3) Data frame handling\nI0323 23:45:42.808974 218 log.go:172] (0xc0009ea000) Data frame received for 5\nI0323 23:45:42.808998 218 log.go:172] (0xc000b76000) (5) Data frame handling\nI0323 23:45:42.809018 218 log.go:172] (0xc000b76000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 23:45:42.809031 218 log.go:172] (0xc0009ea000) Data frame received for 5\nI0323 23:45:42.809055 218 log.go:172] (0xc000b76000) (5) Data frame handling\nI0323 23:45:42.810290 218 log.go:172] (0xc0009ea000) Data frame received for 1\nI0323 23:45:42.810318 218 log.go:172] (0xc0003bebe0) (1) Data frame handling\nI0323 23:45:42.810340 218 log.go:172] (0xc0003bebe0) (1) Data frame sent\nI0323 23:45:42.810363 218 log.go:172] (0xc0009ea000) (0xc0003bebe0) Stream removed, broadcasting: 1\nI0323 23:45:42.810451 218 log.go:172] (0xc0009ea000) Go away received\nI0323 23:45:42.810718 218 log.go:172] (0xc0009ea000) (0xc0003bebe0) Stream removed, broadcasting: 1\nI0323 23:45:42.810733 218 log.go:172] (0xc0009ea000) (0xc000a8a000) Stream removed, broadcasting: 3\nI0323 23:45:42.810741 218 log.go:172] (0xc0009ea000) (0xc000b76000) Stream removed, broadcasting: 5\n" Mar 23 23:45:42.814: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 23:45:42.814: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 23:45:42.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6203 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 23:45:43.078: INFO: stderr: "I0323 23:45:42.940749 238 log.go:172] (0xc0009c6000) (0xc00021aaa0) Create stream\nI0323 23:45:42.940815 238 log.go:172] (0xc0009c6000) (0xc00021aaa0) Stream added, broadcasting: 1\nI0323 23:45:42.943921 238 log.go:172] (0xc0009c6000) Reply frame received for 1\nI0323 23:45:42.943971 238 log.go:172] (0xc0009c6000) (0xc000976000) Create stream\nI0323 23:45:42.943992 238 log.go:172] (0xc0009c6000) (0xc000976000) Stream added, broadcasting: 3\nI0323 23:45:42.945079 238 log.go:172] (0xc0009c6000) Reply frame received for 3\nI0323 23:45:42.945245 238 log.go:172] (0xc0009c6000) (0xc0009ca000) Create stream\nI0323 23:45:42.945266 238 log.go:172] (0xc0009c6000) (0xc0009ca000) Stream added, broadcasting: 5\nI0323 23:45:42.946289 238 log.go:172] (0xc0009c6000) Reply frame received for 5\nI0323 23:45:43.005849 238 log.go:172] (0xc0009c6000) Data frame received for 5\nI0323 23:45:43.005871 238 log.go:172] (0xc0009ca000) (5) Data frame handling\nI0323 23:45:43.005883 238 log.go:172] (0xc0009ca000) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0323 23:45:43.073020 238 log.go:172] (0xc0009c6000) Data frame received for 3\nI0323 23:45:43.073047 238 log.go:172] (0xc000976000) (3) Data frame handling\nI0323 23:45:43.073054 238 log.go:172] (0xc000976000) (3) Data frame sent\nI0323 23:45:43.073059 238 log.go:172] (0xc0009c6000) Data frame received for 3\nI0323 23:45:43.073063 238 log.go:172] (0xc000976000) (3) Data frame handling\nI0323 23:45:43.073089 238 log.go:172] (0xc0009c6000) Data frame received for 5\nI0323 23:45:43.073271 238 log.go:172] (0xc0009ca000) (5) Data frame handling\nI0323 23:45:43.074895 238 log.go:172] (0xc0009c6000) Data frame received for 1\nI0323 23:45:43.074913 238 log.go:172] (0xc00021aaa0) (1) Data frame handling\nI0323 23:45:43.074921 238 log.go:172] (0xc00021aaa0) (1) Data frame sent\nI0323 23:45:43.074929 238 log.go:172] (0xc0009c6000) (0xc00021aaa0) Stream removed, broadcasting: 1\nI0323 23:45:43.075005 238 log.go:172] (0xc0009c6000) Go away received\nI0323 23:45:43.075180 238 log.go:172] (0xc0009c6000) (0xc00021aaa0) Stream removed, broadcasting: 1\nI0323 23:45:43.075193 238 log.go:172] (0xc0009c6000) (0xc000976000) Stream removed, broadcasting: 3\nI0323 23:45:43.075199 238 log.go:172] (0xc0009c6000) (0xc0009ca000) Stream removed, broadcasting: 5\n" Mar 23 23:45:43.078: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 23:45:43.078: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 23:45:43.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6203 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 23 23:45:43.312: INFO: stderr: "I0323 23:45:43.214687 257 log.go:172] (0xc00003a580) (0xc0003195e0) Create stream\nI0323 23:45:43.214776 257 log.go:172] (0xc00003a580) (0xc0003195e0) Stream added, broadcasting: 1\nI0323 23:45:43.218096 257 log.go:172] (0xc00003a580) Reply frame received for 1\nI0323 23:45:43.218134 257 log.go:172] (0xc00003a580) (0xc000ae0000) Create stream\nI0323 23:45:43.218142 257 log.go:172] (0xc00003a580) (0xc000ae0000) Stream added, broadcasting: 3\nI0323 23:45:43.219048 257 log.go:172] (0xc00003a580) Reply frame received for 3\nI0323 23:45:43.219112 257 log.go:172] (0xc00003a580) (0xc000acc000) Create stream\nI0323 23:45:43.219130 257 log.go:172] (0xc00003a580) (0xc000acc000) Stream added, broadcasting: 5\nI0323 23:45:43.220025 257 log.go:172] (0xc00003a580) Reply frame received for 5\nI0323 23:45:43.281984 257 log.go:172] (0xc00003a580) Data frame received for 5\nI0323 23:45:43.282035 257 log.go:172] (0xc000acc000) (5) Data frame handling\nI0323 23:45:43.282062 257 log.go:172] (0xc000acc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0323 23:45:43.306803 257 log.go:172] (0xc00003a580) Data frame received for 3\nI0323 23:45:43.306910 257 log.go:172] (0xc000ae0000) (3) Data frame handling\nI0323 23:45:43.306924 257 log.go:172] (0xc000ae0000) (3) Data frame sent\nI0323 23:45:43.306930 257 log.go:172] (0xc00003a580) Data frame received for 3\nI0323 23:45:43.306934 257 log.go:172] (0xc000ae0000) (3) Data frame handling\nI0323 23:45:43.306959 257 log.go:172] (0xc00003a580) Data frame received for 5\nI0323 23:45:43.306972 257 log.go:172] (0xc000acc000) (5) Data frame handling\nI0323 23:45:43.308713 257 log.go:172] (0xc00003a580) Data frame received for 1\nI0323 
23:45:43.308731 257 log.go:172] (0xc0003195e0) (1) Data frame handling\nI0323 23:45:43.308749 257 log.go:172] (0xc0003195e0) (1) Data frame sent\nI0323 23:45:43.308762 257 log.go:172] (0xc00003a580) (0xc0003195e0) Stream removed, broadcasting: 1\nI0323 23:45:43.308850 257 log.go:172] (0xc00003a580) Go away received\nI0323 23:45:43.309107 257 log.go:172] (0xc00003a580) (0xc0003195e0) Stream removed, broadcasting: 1\nI0323 23:45:43.309247 257 log.go:172] (0xc00003a580) (0xc000ae0000) Stream removed, broadcasting: 3\nI0323 23:45:43.309257 257 log.go:172] (0xc00003a580) (0xc000acc000) Stream removed, broadcasting: 5\n" Mar 23 23:45:43.313: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 23 23:45:43.313: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 23 23:45:43.313: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 23:45:43.316: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 23 23:45:53.325: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 23 23:45:53.325: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 23 23:45:53.325: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 23 23:45:53.348: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999721s Mar 23 23:45:54.353: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983540612s Mar 23 23:45:55.357: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978594299s Mar 23 23:45:56.362: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974525801s Mar 23 23:45:57.367: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.969856307s Mar 23 23:45:58.372: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.965019637s Mar 23 23:45:59.377: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.960127822s Mar 23 23:46:00.382: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.954851403s Mar 23 23:46:01.387: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.949790115s Mar 23 23:46:02.393: INFO: Verifying statefulset ss doesn't scale past 3 for another 945.046294ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6203 Mar 23 23:46:03.398: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6203 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 23:46:03.635: INFO: stderr: "I0323 23:46:03.532469 277 log.go:172] (0xc00096a630) (0xc000b2c140) Create stream\nI0323 23:46:03.532529 277 log.go:172] (0xc00096a630) (0xc000b2c140) Stream added, broadcasting: 1\nI0323 23:46:03.534951 277 log.go:172] (0xc00096a630) Reply frame received for 1\nI0323 23:46:03.534999 277 log.go:172] (0xc00096a630) (0xc00044aa00) Create stream\nI0323 23:46:03.535021 277 log.go:172] (0xc00096a630) (0xc00044aa00) Stream added, broadcasting: 3\nI0323 23:46:03.536177 277 log.go:172] (0xc00096a630) Reply frame received for 3\nI0323 23:46:03.536226 277 log.go:172] (0xc00096a630) (0xc000b2c1e0) Create stream\nI0323 23:46:03.536238 277 log.go:172] (0xc00096a630) (0xc000b2c1e0) Stream added, broadcasting: 5\nI0323 23:46:03.537388 277 log.go:172] (0xc00096a630) Reply frame
received for 5\nI0323 23:46:03.628480 277 log.go:172] (0xc00096a630) Data frame received for 3\nI0323 23:46:03.628522 277 log.go:172] (0xc00044aa00) (3) Data frame handling\nI0323 23:46:03.628545 277 log.go:172] (0xc00044aa00) (3) Data frame sent\nI0323 23:46:03.628557 277 log.go:172] (0xc00096a630) Data frame received for 3\nI0323 23:46:03.628575 277 log.go:172] (0xc00044aa00) (3) Data frame handling\nI0323 23:46:03.628691 277 log.go:172] (0xc00096a630) Data frame received for 5\nI0323 23:46:03.628726 277 log.go:172] (0xc000b2c1e0) (5) Data frame handling\nI0323 23:46:03.628747 277 log.go:172] (0xc000b2c1e0) (5) Data frame sent\nI0323 23:46:03.628756 277 log.go:172] (0xc00096a630) Data frame received for 5\nI0323 23:46:03.628767 277 log.go:172] (0xc000b2c1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 23:46:03.630570 277 log.go:172] (0xc00096a630) Data frame received for 1\nI0323 23:46:03.630614 277 log.go:172] (0xc000b2c140) (1) Data frame handling\nI0323 23:46:03.630654 277 log.go:172] (0xc000b2c140) (1) Data frame sent\nI0323 23:46:03.630675 277 log.go:172] (0xc00096a630) (0xc000b2c140) Stream removed, broadcasting: 1\nI0323 23:46:03.630697 277 log.go:172] (0xc00096a630) Go away received\nI0323 23:46:03.631221 277 log.go:172] (0xc00096a630) (0xc000b2c140) Stream removed, broadcasting: 1\nI0323 23:46:03.631248 277 log.go:172] (0xc00096a630) (0xc00044aa00) Stream removed, broadcasting: 3\nI0323 23:46:03.631261 277 log.go:172] (0xc00096a630) (0xc000b2c1e0) Stream removed, broadcasting: 5\n" Mar 23 23:46:03.636: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 23:46:03.636: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 23:46:03.636: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6203 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 23:46:03.820: INFO: stderr: "I0323 23:46:03.748160 300 log.go:172] (0xc0009b62c0) (0xc00099c140) Create stream\nI0323 23:46:03.748242 300 log.go:172] (0xc0009b62c0) (0xc00099c140) Stream added, broadcasting: 1\nI0323 23:46:03.753365 300 log.go:172] (0xc0009b62c0) Reply frame received for 1\nI0323 23:46:03.753424 300 log.go:172] (0xc0009b62c0) (0xc00064b5e0) Create stream\nI0323 23:46:03.753439 300 log.go:172] (0xc0009b62c0) (0xc00064b5e0) Stream added, broadcasting: 3\nI0323 23:46:03.754846 300 log.go:172] (0xc0009b62c0) Reply frame received for 3\nI0323 23:46:03.754888 300 log.go:172] (0xc0009b62c0) (0xc0004eaa00) Create stream\nI0323 23:46:03.754907 300 log.go:172] (0xc0009b62c0) (0xc0004eaa00) Stream added, broadcasting: 5\nI0323 23:46:03.755882 300 log.go:172] (0xc0009b62c0) Reply frame received for 5\nI0323 23:46:03.815335 300 log.go:172] (0xc0009b62c0) Data frame received for 5\nI0323 23:46:03.815385 300 log.go:172] (0xc0004eaa00) (5) Data frame handling\nI0323 23:46:03.815405 300 log.go:172] (0xc0004eaa00) (5) Data frame sent\nI0323 23:46:03.815422 300 log.go:172] (0xc0009b62c0) Data frame received for 5\nI0323 23:46:03.815443 300 log.go:172] (0xc0004eaa00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 23:46:03.815487 300 log.go:172] (0xc0009b62c0) Data frame received for 3\nI0323 23:46:03.815525 300 log.go:172] (0xc00064b5e0) (3) Data frame handling\nI0323 23:46:03.815550 300 log.go:172] (0xc00064b5e0) (3) Data frame 
sent\nI0323 23:46:03.815566 300 log.go:172] (0xc0009b62c0) Data frame received for 3\nI0323 23:46:03.815579 300 log.go:172] (0xc00064b5e0) (3) Data frame handling\nI0323 23:46:03.816605 300 log.go:172] (0xc0009b62c0) Data frame received for 1\nI0323 23:46:03.816623 300 log.go:172] (0xc00099c140) (1) Data frame handling\nI0323 23:46:03.816635 300 log.go:172] (0xc00099c140) (1) Data frame sent\nI0323 23:46:03.816648 300 log.go:172] (0xc0009b62c0) (0xc00099c140) Stream removed, broadcasting: 1\nI0323 23:46:03.816665 300 log.go:172] (0xc0009b62c0) Go away received\nI0323 23:46:03.816959 300 log.go:172] (0xc0009b62c0) (0xc00099c140) Stream removed, broadcasting: 1\nI0323 23:46:03.816977 300 log.go:172] (0xc0009b62c0) (0xc00064b5e0) Stream removed, broadcasting: 3\nI0323 23:46:03.816986 300 log.go:172] (0xc0009b62c0) (0xc0004eaa00) Stream removed, broadcasting: 5\n" Mar 23 23:46:03.820: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 23:46:03.820: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 23:46:03.820: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6203 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 23 23:46:03.993: INFO: stderr: "I0323 23:46:03.928989 320 log.go:172] (0xc000929340) (0xc000a5a6e0) Create stream\nI0323 23:46:03.929035 320 log.go:172] (0xc000929340) (0xc000a5a6e0) Stream added, broadcasting: 1\nI0323 23:46:03.933477 320 log.go:172] (0xc000929340) Reply frame received for 1\nI0323 23:46:03.933509 320 log.go:172] (0xc000929340) (0xc00067f5e0) Create stream\nI0323 23:46:03.933517 320 log.go:172] (0xc000929340) (0xc00067f5e0) Stream added, broadcasting: 3\nI0323 23:46:03.934459 320 log.go:172] (0xc000929340) Reply frame received for 3\nI0323 23:46:03.934481 320 log.go:172] (0xc000929340) (0xc000556a00) Create stream\nI0323 23:46:03.934489 320 log.go:172] (0xc000929340) (0xc000556a00) Stream added, broadcasting: 5\nI0323 23:46:03.935448 320 log.go:172] (0xc000929340) Reply frame received for 5\nI0323 23:46:03.986803 320 log.go:172] (0xc000929340) Data frame received for 3\nI0323 23:46:03.986828 320 log.go:172] (0xc00067f5e0) (3) Data frame handling\nI0323 23:46:03.986871 320 log.go:172] (0xc00067f5e0) (3) Data frame sent\nI0323 23:46:03.986891 320 log.go:172] (0xc000929340) Data frame received for 3\nI0323 23:46:03.986908 320 log.go:172] (0xc00067f5e0) (3) Data frame handling\nI0323 23:46:03.987125 320 log.go:172] (0xc000929340) Data frame received for 5\nI0323 23:46:03.987173 320 log.go:172] (0xc000556a00) (5) Data frame handling\nI0323 23:46:03.987209 320 log.go:172] (0xc000556a00) (5) Data frame sent\nI0323 23:46:03.987234 320 log.go:172] (0xc000929340) Data frame received for 5\nI0323 23:46:03.987250 320 log.go:172] (0xc000556a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0323 23:46:03.988739 320 log.go:172] (0xc000929340) Data frame received for 1\nI0323 23:46:03.988755 320 log.go:172] (0xc000a5a6e0) (1) Data frame handling\nI0323 23:46:03.988782 320 log.go:172] (0xc000a5a6e0) (1) Data frame sent\nI0323 23:46:03.988800 320 log.go:172] (0xc000929340) (0xc000a5a6e0) Stream removed, broadcasting: 1\nI0323 23:46:03.988829 320 log.go:172] (0xc000929340) Go away received\nI0323 23:46:03.989412 320 log.go:172] (0xc000929340) (0xc000a5a6e0) Stream removed, broadcasting: 1\nI0323 
23:46:03.989437 320 log.go:172] (0xc000929340) (0xc00067f5e0) Stream removed, broadcasting: 3\nI0323 23:46:03.989450 320 log.go:172] (0xc000929340) (0xc000556a00) Stream removed, broadcasting: 5\n" Mar 23 23:46:03.993: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 23 23:46:03.993: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 23 23:46:03.993: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 23 23:46:24.010: INFO: Deleting all statefulset in ns statefulset-6203 Mar 23 23:46:24.013: INFO: Scaling statefulset ss to 0 Mar 23 23:46:24.022: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 23:46:24.024: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:46:24.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6203" for this suite. • [SLOW TEST:82.187 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":42,"skipped":629,"failed":0} [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:46:24.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Mar 23 23:46:24.125: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix285049117/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:46:24.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6182" for this suite.
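
[Editor's note] The proxy test above binds kubectl proxy to a Unix domain socket instead of a TCP port, then fetches /api/ through that socket. A hand-run equivalent; the socket path here is arbitrary:

    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    sleep 1   # give the proxy a moment to create the socket
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
    kill %1   # stop the background proxy
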
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":43,"skipped":629,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:46:24.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 23 23:46:28.809: INFO: Successfully updated pod "pod-update-activedeadlineseconds-323e51ff-cb81-4be5-acfb-865e2bbf7b22" Mar 23 23:46:28.809: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-323e51ff-cb81-4be5-acfb-865e2bbf7b22" in namespace "pods-4660" to be "terminated due to deadline exceeded" Mar 23 23:46:28.825: INFO: Pod "pod-update-activedeadlineseconds-323e51ff-cb81-4be5-acfb-865e2bbf7b22": Phase="Running", Reason="", readiness=true. Elapsed: 15.723121ms Mar 23 23:46:30.830: INFO: Pod "pod-update-activedeadlineseconds-323e51ff-cb81-4be5-acfb-865e2bbf7b22": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.020240807s Mar 23 23:46:30.830: INFO: Pod "pod-update-activedeadlineseconds-323e51ff-cb81-4be5-acfb-865e2bbf7b22" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:46:30.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4660" for this suite. 
• [SLOW TEST:6.636 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":643,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:46:30.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 23 23:46:30.908: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8169 /api/v1/namespaces/watch-8169/configmaps/e2e-watch-test-watch-closed c8d9debc-c1e9-49b4-8204-31ad43be53a6 2271260 0 2020-03-23 23:46:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 23:46:30.908: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8169 /api/v1/namespaces/watch-8169/configmaps/e2e-watch-test-watch-closed c8d9debc-c1e9-49b4-8204-31ad43be53a6 2271261 0 2020-03-23 23:46:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 23 23:46:30.921: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8169 /api/v1/namespaces/watch-8169/configmaps/e2e-watch-test-watch-closed c8d9debc-c1e9-49b4-8204-31ad43be53a6 2271262 0 2020-03-23 23:46:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 23 23:46:30.922: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8169 /api/v1/namespaces/watch-8169/configmaps/e2e-watch-test-watch-closed c8d9debc-c1e9-49b4-8204-31ad43be53a6 2271263 0 2020-03-23 23:46:30 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:46:30.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8169" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":45,"skipped":651,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:46:30.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 23 23:46:30.987: INFO: Waiting up to 5m0s for pod "pod-93aa8577-d84c-4418-8d4c-c90c6aae66a0" in namespace "emptydir-6579" to be "Succeeded or Failed" Mar 23 23:46:30.991: INFO: Pod "pod-93aa8577-d84c-4418-8d4c-c90c6aae66a0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.768878ms Mar 23 23:46:32.994: INFO: Pod "pod-93aa8577-d84c-4418-8d4c-c90c6aae66a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007143356s Mar 23 23:46:34.999: INFO: Pod "pod-93aa8577-d84c-4418-8d4c-c90c6aae66a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011951969s STEP: Saw pod success Mar 23 23:46:34.999: INFO: Pod "pod-93aa8577-d84c-4418-8d4c-c90c6aae66a0" satisfied condition "Succeeded or Failed" Mar 23 23:46:35.002: INFO: Trying to get logs from node latest-worker2 pod pod-93aa8577-d84c-4418-8d4c-c90c6aae66a0 container test-container: STEP: delete the pod Mar 23 23:46:35.028: INFO: Waiting for pod pod-93aa8577-d84c-4418-8d4c-c90c6aae66a0 to disappear Mar 23 23:46:35.032: INFO: Pod pod-93aa8577-d84c-4418-8d4c-c90c6aae66a0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:46:35.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6579" for this suite. 
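
[Editor's note] The emptyDir case above, (root,0644,default), means: running as root, expecting file mode 0644, on the default storage medium (node disk rather than tmpfs). The real test uses the framework's mount-test image; the pod below is only an illustrative stand-in that writes and stats a file itself:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-0644-demo
    spec:
      restartPolicy: Never
      volumes:
      - name: scratch
        emptyDir: {}       # no medium set, i.e. the default
      containers:
      - name: test
        image: busybox:1.29
        command: ["sh", "-c", "touch /scratch/f && chmod 0644 /scratch/f && stat -c '%a %U' /scratch/f"]
        volumeMounts:
        - name: scratch
          mountPath: /scratch
    EOF
    kubectl logs emptydir-0644-demo   # expected once the pod succeeds: 644 root
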
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":669,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:46:35.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-d2432675-b98f-4c4e-9413-68ef741c9786 STEP: Creating a pod to test consume secrets Mar 23 23:46:35.101: INFO: Waiting up to 5m0s for pod "pod-secrets-a08ec0fc-2dc6-416e-94a9-9c6f3537e78f" in namespace "secrets-3140" to be "Succeeded or Failed" Mar 23 23:46:35.155: INFO: Pod "pod-secrets-a08ec0fc-2dc6-416e-94a9-9c6f3537e78f": Phase="Pending", Reason="", readiness=false. Elapsed: 54.157435ms Mar 23 23:46:37.221: INFO: Pod "pod-secrets-a08ec0fc-2dc6-416e-94a9-9c6f3537e78f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120189824s Mar 23 23:46:39.228: INFO: Pod "pod-secrets-a08ec0fc-2dc6-416e-94a9-9c6f3537e78f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126835473s STEP: Saw pod success Mar 23 23:46:39.228: INFO: Pod "pod-secrets-a08ec0fc-2dc6-416e-94a9-9c6f3537e78f" satisfied condition "Succeeded or Failed" Mar 23 23:46:39.231: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a08ec0fc-2dc6-416e-94a9-9c6f3537e78f container secret-volume-test: STEP: delete the pod Mar 23 23:46:39.366: INFO: Waiting for pod pod-secrets-a08ec0fc-2dc6-416e-94a9-9c6f3537e78f to disappear Mar 23 23:46:39.400: INFO: Pod pod-secrets-a08ec0fc-2dc6-416e-94a9-9c6f3537e78f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:46:39.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3140" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":669,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:46:39.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:46:43.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4569" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":701,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:46:43.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 23 23:46:43.569: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:46:50.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2736" for this suite. 
• [SLOW TEST:6.590 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":49,"skipped":708,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:46:50.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-3ba83889-a71e-4a25-8299-e19683b1d1a8 in namespace container-probe-7700 Mar 23 23:46:54.211: INFO: Started pod busybox-3ba83889-a71e-4a25-8299-e19683b1d1a8 in namespace container-probe-7700 STEP: checking the pod's current state and verifying that restartCount is present Mar 23 23:46:54.214: INFO: Initial restart count of pod busybox-3ba83889-a71e-4a25-8299-e19683b1d1a8 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:50:54.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7700" for this suite. 
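[Editor's note] The liveness probe in this spec succeeds for the whole watch window, which is why the test simply observes restartCount staying at 0 for about four minutes. A minimal sketch of such a pod, assuming a busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # exits 0 as long as the file exists
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1

Because /tmp/health is created at startup and never deleted, cat keeps exiting 0 and the kubelet never restarts the container; a variant that removed the file would trip the probe instead.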
• [SLOW TEST:244.698 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":725,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:50:54.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:50:54.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2746" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":51,"skipped":726,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:50:54.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:51:05.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7562" for this suite. • [SLOW TEST:11.080 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":52,"skipped":728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:51:05.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:51:10.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9492" for this suite. 
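[Editor's note] The read-only case just logged relies on securityContext.readOnlyRootFilesystem: the container's root filesystem is mounted read-only, so any write to it fails. A minimal sketch with an illustrative command:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo hello > /file"]   # fails: read-only file system
    securityContext:
      readOnlyRootFilesystem: true

Writable paths, if needed, have to come from volumes (an emptyDir mounted at /tmp, for example).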
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":53,"skipped":774,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:51:10.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 23 23:51:10.104: INFO: namespace kubectl-4399 Mar 23 23:51:10.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4399' Mar 23 23:51:10.443: INFO: stderr: "" Mar 23 23:51:10.443: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 23 23:51:11.448: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 23:51:11.448: INFO: Found 0 / 1 Mar 23 23:51:12.448: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 23:51:12.448: INFO: Found 0 / 1 Mar 23 23:51:13.447: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 23:51:13.447: INFO: Found 0 / 1 Mar 23 23:51:14.447: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 23:51:14.447: INFO: Found 1 / 1 Mar 23 23:51:14.447: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 23 23:51:14.451: INFO: Selector matched 1 pods for map[app:agnhost] Mar 23 23:51:14.451: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 23 23:51:14.451: INFO: wait on agnhost-master startup in kubectl-4399 Mar 23 23:51:14.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-gzdvx agnhost-master --namespace=kubectl-4399' Mar 23 23:51:14.567: INFO: stderr: "" Mar 23 23:51:14.567: INFO: stdout: "Paused\n" STEP: exposing RC Mar 23 23:51:14.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4399' Mar 23 23:51:14.693: INFO: stderr: "" Mar 23 23:51:14.693: INFO: stdout: "service/rm2 exposed\n" Mar 23 23:51:14.699: INFO: Service rm2 in namespace kubectl-4399 found. STEP: exposing service Mar 23 23:51:16.706: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4399' Mar 23 23:51:16.830: INFO: stderr: "" Mar 23 23:51:16.830: INFO: stdout: "service/rm3 exposed\n" Mar 23 23:51:16.883: INFO: Service rm3 in namespace kubectl-4399 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:51:18.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4399" for this suite. • [SLOW TEST:8.850 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":54,"skipped":776,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:51:18.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:51:32.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5195" for this suite. • [SLOW TEST:13.229 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":55,"skipped":801,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:51:32.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 23 23:51:32.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-936' Mar 23 23:51:32.298: INFO: stderr: "" Mar 23 23:51:32.298: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 23 23:51:37.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-936 -o json' Mar 23 23:51:37.438: INFO: stderr: "" Mar 23 23:51:37.438: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-23T23:51:32Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-936\",\n \"resourceVersion\": \"2272432\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-936/pods/e2e-test-httpd-pod\",\n \"uid\": \"ca723082-a15f-4fa4-80da-7d5b55648d44\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-sfr9q\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-sfr9q\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-sfr9q\"\n }\n }\n 
]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-23T23:51:32Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-23T23:51:35Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-23T23:51:35Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-23T23:51:32Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://8c308870cf41ff5ff75114d4093a8fe60959e1c947f8688fc13b51ccc90cb77f\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-23T23:51:34Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.29\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.29\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-23T23:51:32Z\"\n }\n}\n" STEP: replace the image in the pod Mar 23 23:51:37.438: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-936' Mar 23 23:51:37.663: INFO: stderr: "" Mar 23 23:51:37.663: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 23 23:51:37.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-936' Mar 23 23:51:43.022: INFO: stderr: "" Mar 23 23:51:43.022: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:51:43.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-936" for this suite. 
• [SLOW TEST:10.901 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":56,"skipped":811,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:51:43.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 23:51:43.094: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b3f6bf3-cd7f-443a-9797-d6134a17ad11" in namespace "projected-4761" to be "Succeeded or Failed" Mar 23 23:51:43.098: INFO: Pod "downwardapi-volume-7b3f6bf3-cd7f-443a-9797-d6134a17ad11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417672ms Mar 23 23:51:45.103: INFO: Pod "downwardapi-volume-7b3f6bf3-cd7f-443a-9797-d6134a17ad11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008893426s Mar 23 23:51:47.107: INFO: Pod "downwardapi-volume-7b3f6bf3-cd7f-443a-9797-d6134a17ad11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013768439s STEP: Saw pod success Mar 23 23:51:47.108: INFO: Pod "downwardapi-volume-7b3f6bf3-cd7f-443a-9797-d6134a17ad11" satisfied condition "Succeeded or Failed" Mar 23 23:51:47.111: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7b3f6bf3-cd7f-443a-9797-d6134a17ad11 container client-container: STEP: delete the pod Mar 23 23:51:47.167: INFO: Waiting for pod downwardapi-volume-7b3f6bf3-cd7f-443a-9797-d6134a17ad11 to disappear Mar 23 23:51:47.182: INFO: Pod downwardapi-volume-7b3f6bf3-cd7f-443a-9797-d6134a17ad11 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:51:47.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4761" for this suite. 
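[Editor's note] The projected downwardAPI volume used here writes the container's memory request into a file the test then reads back. A minimal sketch, with illustrative names and a 32Mi request:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi            # file contents: "32"

The test passes when the pod exits 0 and its log shows the expected number, which is why the flow above is create, wait for "Succeeded or Failed", read logs, delete.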
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:51:47.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 23 23:51:51.343: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:51:51.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6311" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":901,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:51:51.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 23 23:51:51.891: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 23 23:51:53.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604311, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604311, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604311, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604311, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 23:51:56.927: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:51:56.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:51:58.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7216" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.828 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":59,"skipped":913,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:51:58.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-1b4e6005-6fab-474e-b90f-3855799d8622 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:51:58.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8496" for this suite. 
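[Editor's note] ConfigMap keys are validated on create (they must be non-empty and consist of alphanumerics, '-', '_' or '.'), so a manifest like the following is rejected by the apiserver, which is exactly the failure the test expects:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey-demo
data:
  "": "value-1"      # invalid: an empty key fails validation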
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":60,"skipped":925,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:51:58.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 23 23:51:58.748: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 23 23:52:00.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604318, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604318, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604318, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604318, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 23:52:03.796: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:52:03.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1609" for this suite. STEP: Destroying namespace "webhook-1609-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.645 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":61,"skipped":926,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:52:03.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:52:04.045: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:52:04.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-458" for this suite. 
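[Editor's note] The status sub-resource being exercised is switched on per version in the CRD. A minimal sketch with a hypothetical group:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.example.com            # hypothetical
spec:
  group: example.com
  scope: Namespaced
  names: { plural: noxus, singular: noxu, kind: Noxu }
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}                     # enables the .../status endpoint

With the sub-resource enabled, updates to the main endpoint ignore .status and updates to /status ignore everything else, which is the get/update/patch behaviour the test checks.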
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":62,"skipped":934,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:52:04.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 23 23:52:04.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8981' Mar 23 23:52:07.440: INFO: stderr: "" Mar 23 23:52:07.440: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Mar 23 23:52:07.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8981' Mar 23 23:52:09.175: INFO: stderr: "" Mar 23 23:52:09.175: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:52:09.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8981" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":63,"skipped":943,"failed":0} ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:52:09.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 23 23:52:09.295: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:52:14.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4321" for this suite. • [SLOW TEST:5.441 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":64,"skipped":943,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:52:14.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-6298 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 23 23:52:14.701: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 23 23:52:14.782: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 23 23:52:16.822: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = 
true) Mar 23 23:52:18.786: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 23:52:20.825: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 23:52:22.787: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 23:52:24.792: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 23:52:26.787: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 23:52:28.787: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 23:52:30.787: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 23:52:32.787: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 23:52:34.787: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 23 23:52:36.787: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 23 23:52:36.793: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 23 23:52:40.828: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.153:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6298 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:52:40.828: INFO: >>> kubeConfig: /root/.kube/config I0323 23:52:40.867279 7 log.go:172] (0xc0024ffc30) (0xc0016bae60) Create stream I0323 23:52:40.867312 7 log.go:172] (0xc0024ffc30) (0xc0016bae60) Stream added, broadcasting: 1 I0323 23:52:40.869525 7 log.go:172] (0xc0024ffc30) Reply frame received for 1 I0323 23:52:40.869576 7 log.go:172] (0xc0024ffc30) (0xc001540be0) Create stream I0323 23:52:40.869592 7 log.go:172] (0xc0024ffc30) (0xc001540be0) Stream added, broadcasting: 3 I0323 23:52:40.870759 7 log.go:172] (0xc0024ffc30) Reply frame received for 3 I0323 23:52:40.870807 7 log.go:172] (0xc0024ffc30) (0xc000b9a140) Create stream I0323 23:52:40.870824 7 log.go:172] (0xc0024ffc30) (0xc000b9a140) Stream added, broadcasting: 5 I0323 23:52:40.872026 7 log.go:172] (0xc0024ffc30) Reply frame received for 5 I0323 23:52:40.968896 7 log.go:172] (0xc0024ffc30) Data frame received for 5 I0323 23:52:40.968923 7 log.go:172] (0xc000b9a140) (5) Data frame handling I0323 23:52:40.968940 7 log.go:172] (0xc0024ffc30) Data frame received for 3 I0323 23:52:40.968951 7 log.go:172] (0xc001540be0) (3) Data frame handling I0323 23:52:40.968966 7 log.go:172] (0xc001540be0) (3) Data frame sent I0323 23:52:40.968973 7 log.go:172] (0xc0024ffc30) Data frame received for 3 I0323 23:52:40.968980 7 log.go:172] (0xc001540be0) (3) Data frame handling I0323 23:52:40.970093 7 log.go:172] (0xc0024ffc30) Data frame received for 1 I0323 23:52:40.970159 7 log.go:172] (0xc0016bae60) (1) Data frame handling I0323 23:52:40.970194 7 log.go:172] (0xc0016bae60) (1) Data frame sent I0323 23:52:40.970214 7 log.go:172] (0xc0024ffc30) (0xc0016bae60) Stream removed, broadcasting: 1 I0323 23:52:40.970230 7 log.go:172] (0xc0024ffc30) Go away received I0323 23:52:40.970314 7 log.go:172] (0xc0024ffc30) (0xc0016bae60) Stream removed, broadcasting: 1 I0323 23:52:40.970328 7 log.go:172] (0xc0024ffc30) (0xc001540be0) Stream removed, broadcasting: 3 I0323 23:52:40.970333 7 log.go:172] (0xc0024ffc30) (0xc000b9a140) Stream removed, broadcasting: 5 Mar 23 23:52:40.970: INFO: Found all expected endpoints: [netserver-0] Mar 23 23:52:40.973: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.33:8080/hostName | grep -v '^\s*$'] 
Namespace:pod-network-test-6298 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:52:40.973: INFO: >>> kubeConfig: /root/.kube/config I0323 23:52:40.998879 7 log.go:172] (0xc002567ad0) (0xc0016bb5e0) Create stream I0323 23:52:40.998903 7 log.go:172] (0xc002567ad0) (0xc0016bb5e0) Stream added, broadcasting: 1 I0323 23:52:41.005041 7 log.go:172] (0xc002567ad0) Reply frame received for 1 I0323 23:52:41.005088 7 log.go:172] (0xc002567ad0) (0xc0016bb720) Create stream I0323 23:52:41.005101 7 log.go:172] (0xc002567ad0) (0xc0016bb720) Stream added, broadcasting: 3 I0323 23:52:41.006149 7 log.go:172] (0xc002567ad0) Reply frame received for 3 I0323 23:52:41.006181 7 log.go:172] (0xc002567ad0) (0xc001540d20) Create stream I0323 23:52:41.006192 7 log.go:172] (0xc002567ad0) (0xc001540d20) Stream added, broadcasting: 5 I0323 23:52:41.007055 7 log.go:172] (0xc002567ad0) Reply frame received for 5 I0323 23:52:41.060599 7 log.go:172] (0xc002567ad0) Data frame received for 3 I0323 23:52:41.060640 7 log.go:172] (0xc0016bb720) (3) Data frame handling I0323 23:52:41.060652 7 log.go:172] (0xc0016bb720) (3) Data frame sent I0323 23:52:41.060666 7 log.go:172] (0xc002567ad0) Data frame received for 3 I0323 23:52:41.060679 7 log.go:172] (0xc0016bb720) (3) Data frame handling I0323 23:52:41.060709 7 log.go:172] (0xc002567ad0) Data frame received for 5 I0323 23:52:41.060725 7 log.go:172] (0xc001540d20) (5) Data frame handling I0323 23:52:41.062551 7 log.go:172] (0xc002567ad0) Data frame received for 1 I0323 23:52:41.062572 7 log.go:172] (0xc0016bb5e0) (1) Data frame handling I0323 23:52:41.062592 7 log.go:172] (0xc0016bb5e0) (1) Data frame sent I0323 23:52:41.062606 7 log.go:172] (0xc002567ad0) (0xc0016bb5e0) Stream removed, broadcasting: 1 I0323 23:52:41.062667 7 log.go:172] (0xc002567ad0) Go away received I0323 23:52:41.062751 7 log.go:172] (0xc002567ad0) (0xc0016bb5e0) Stream removed, broadcasting: 1 I0323 23:52:41.062789 7 log.go:172] (0xc002567ad0) (0xc0016bb720) Stream removed, broadcasting: 3 I0323 23:52:41.062817 7 log.go:172] (0xc002567ad0) (0xc001540d20) Stream removed, broadcasting: 5 Mar 23 23:52:41.062: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:52:41.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6298" for this suite. 
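[Editor's note] The endpoint discovery above is nothing more exotic than an HTTP GET from the host-network test pod to each netserver's agnhost container, which answers /hostName with its own pod name; the exec'd command, verbatim from the log:

curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.153:8080/hostName | grep -v '^\s*$'
# → netserver-0 (repeated against 10.244.1.33 for netserver-1)

Collecting the set of names returned and comparing it with the expected pod list is what "Found all expected endpoints" reports.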
• [SLOW TEST:26.414 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":962,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:52:41.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 23 23:52:41.151: INFO: Waiting up to 5m0s for pod "downward-api-68485f46-f205-4a2d-8be1-44c25013e3ca" in namespace "downward-api-413" to be "Succeeded or Failed" Mar 23 23:52:41.167: INFO: Pod "downward-api-68485f46-f205-4a2d-8be1-44c25013e3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 15.332368ms Mar 23 23:52:43.171: INFO: Pod "downward-api-68485f46-f205-4a2d-8be1-44c25013e3ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01930269s Mar 23 23:52:45.174: INFO: Pod "downward-api-68485f46-f205-4a2d-8be1-44c25013e3ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023075031s STEP: Saw pod success Mar 23 23:52:45.174: INFO: Pod "downward-api-68485f46-f205-4a2d-8be1-44c25013e3ca" satisfied condition "Succeeded or Failed" Mar 23 23:52:45.178: INFO: Trying to get logs from node latest-worker2 pod downward-api-68485f46-f205-4a2d-8be1-44c25013e3ca container dapi-container: STEP: delete the pod Mar 23 23:52:45.197: INFO: Waiting for pod downward-api-68485f46-f205-4a2d-8be1-44c25013e3ca to disappear Mar 23 23:52:45.201: INFO: Pod downward-api-68485f46-f205-4a2d-8be1-44c25013e3ca no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:52:45.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-413" for this suite. 
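[Editor's note] The env-var flavour of the downward API used here maps resource requests and limits into the container's environment via resourceFieldRef. A minimal sketch with illustrative values; note the divisor defaults to 1, so fractional CPU values are rounded up to whole cores unless a finer divisor such as 1m is given:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "env"]
    resources:
      requests: { cpu: 250m, memory: 32Mi }
      limits:   { cpu: 500m, memory: 64Mi }
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
          divisor: 1m                # reported as 500
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory  # reported in bytes with the default divisor

The test that follows, for default limits, leans on the same mechanism: when a container declares no limits, limits.cpu and limits.memory resolve to the node's allocatable capacity.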
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":967,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:52:45.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 23 23:52:45.318: INFO: Waiting up to 5m0s for pod "downward-api-d3dd295d-e8a9-463c-b549-b58d689b0499" in namespace "downward-api-7553" to be "Succeeded or Failed" Mar 23 23:52:45.335: INFO: Pod "downward-api-d3dd295d-e8a9-463c-b549-b58d689b0499": Phase="Pending", Reason="", readiness=false. Elapsed: 17.805818ms Mar 23 23:52:47.339: INFO: Pod "downward-api-d3dd295d-e8a9-463c-b549-b58d689b0499": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021796619s Mar 23 23:52:49.343: INFO: Pod "downward-api-d3dd295d-e8a9-463c-b549-b58d689b0499": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025066533s STEP: Saw pod success Mar 23 23:52:49.343: INFO: Pod "downward-api-d3dd295d-e8a9-463c-b549-b58d689b0499" satisfied condition "Succeeded or Failed" Mar 23 23:52:49.345: INFO: Trying to get logs from node latest-worker pod downward-api-d3dd295d-e8a9-463c-b549-b58d689b0499 container dapi-container: STEP: delete the pod Mar 23 23:52:49.382: INFO: Waiting for pod downward-api-d3dd295d-e8a9-463c-b549-b58d689b0499 to disappear Mar 23 23:52:49.423: INFO: Pod downward-api-d3dd295d-e8a9-463c-b549-b58d689b0499 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:52:49.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7553" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:52:49.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:52:49.472: INFO: Creating ReplicaSet my-hostname-basic-51b26c34-45b7-4355-b0d5-67ee58ab78ef Mar 23 23:52:49.494: INFO: Pod name my-hostname-basic-51b26c34-45b7-4355-b0d5-67ee58ab78ef: Found 0 pods out of 1 Mar 23 23:52:54.514: INFO: Pod name my-hostname-basic-51b26c34-45b7-4355-b0d5-67ee58ab78ef: Found 1 pods out of 1 Mar 23 23:52:54.514: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-51b26c34-45b7-4355-b0d5-67ee58ab78ef" is running Mar 23 23:52:54.518: INFO: Pod "my-hostname-basic-51b26c34-45b7-4355-b0d5-67ee58ab78ef-kgcnc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 23:52:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 23:52:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 23:52:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 23:52:49 +0000 UTC Reason: Message:}]) Mar 23 23:52:54.518: INFO: Trying to dial the pod Mar 23 23:52:59.527: INFO: Controller my-hostname-basic-51b26c34-45b7-4355-b0d5-67ee58ab78ef: Got expected result from replica 1 [my-hostname-basic-51b26c34-45b7-4355-b0d5-67ee58ab78ef-kgcnc]: "my-hostname-basic-51b26c34-45b7-4355-b0d5-67ee58ab78ef-kgcnc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:52:59.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8641" for this suite. 
• [SLOW TEST:10.104 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":68,"skipped":1004,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:52:59.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 23:52:59.640: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc161eac-6b0e-4a20-b08f-65eeac3f5ab4" in namespace "projected-7166" to be "Succeeded or Failed" Mar 23 23:52:59.645: INFO: Pod "downwardapi-volume-bc161eac-6b0e-4a20-b08f-65eeac3f5ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.145099ms Mar 23 23:53:01.649: INFO: Pod "downwardapi-volume-bc161eac-6b0e-4a20-b08f-65eeac3f5ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009540729s Mar 23 23:53:03.654: INFO: Pod "downwardapi-volume-bc161eac-6b0e-4a20-b08f-65eeac3f5ab4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013892331s STEP: Saw pod success Mar 23 23:53:03.654: INFO: Pod "downwardapi-volume-bc161eac-6b0e-4a20-b08f-65eeac3f5ab4" satisfied condition "Succeeded or Failed" Mar 23 23:53:03.657: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bc161eac-6b0e-4a20-b08f-65eeac3f5ab4 container client-container: STEP: delete the pod Mar 23 23:53:03.693: INFO: Waiting for pod downwardapi-volume-bc161eac-6b0e-4a20-b08f-65eeac3f5ab4 to disappear Mar 23 23:53:03.705: INFO: Pod downwardapi-volume-bc161eac-6b0e-4a20-b08f-65eeac3f5ab4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:53:03.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7166" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1026,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:53:03.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0323 23:53:04.853318 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 23 23:53:04.853: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:53:04.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3921" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":70,"skipped":1037,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:53:04.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 23 23:53:13.002: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 23 23:53:13.026: INFO: Pod pod-with-prestop-http-hook still exists Mar 23 23:53:15.026: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 23 23:53:15.031: INFO: Pod pod-with-prestop-http-hook still exists Mar 23 23:53:17.026: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 23 23:53:17.030: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:53:17.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2177" for this suite. 
• [SLOW TEST:12.186 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1055,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:53:17.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 23 23:53:17.119: INFO: Waiting up to 5m0s for pod "pod-1c622014-9a55-420f-89f8-d5fa76e33672" in namespace "emptydir-4821" to be "Succeeded or Failed" Mar 23 23:53:17.135: INFO: Pod "pod-1c622014-9a55-420f-89f8-d5fa76e33672": Phase="Pending", Reason="", readiness=false. Elapsed: 16.063488ms Mar 23 23:53:19.142: INFO: Pod "pod-1c622014-9a55-420f-89f8-d5fa76e33672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023516239s Mar 23 23:53:21.146: INFO: Pod "pod-1c622014-9a55-420f-89f8-d5fa76e33672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027462998s STEP: Saw pod success Mar 23 23:53:21.146: INFO: Pod "pod-1c622014-9a55-420f-89f8-d5fa76e33672" satisfied condition "Succeeded or Failed" Mar 23 23:53:21.149: INFO: Trying to get logs from node latest-worker pod pod-1c622014-9a55-420f-89f8-d5fa76e33672 container test-container: STEP: delete the pod Mar 23 23:53:21.174: INFO: Waiting for pod pod-1c622014-9a55-420f-89f8-d5fa76e33672 to disappear Mar 23 23:53:21.178: INFO: Pod pod-1c622014-9a55-420f-89f8-d5fa76e33672 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:53:21.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4821" for this suite. 
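------------------------------
The emptyDir spec above checks that a volume on the default medium comes up with mode 0777 and is usable by root. A sketch of the pod shape, reusing the earlier client setup; image and mount path are illustrative:

func emptyDirPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox:1.31", // illustrative
				Command:      []string{"sh", "-c", "stat -c %a /mnt/ed"},
				VolumeMounts: []corev1.VolumeMount{{Name: "ed", MountPath: "/mnt/ed"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "ed",
				// An empty EmptyDirVolumeSource selects the default medium (node
				// disk); Medium: corev1.StorageMediumMemory would use tmpfs instead.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
------------------------------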
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1096,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:53:21.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:53:21.228: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:53:22.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1947" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":73,"skipped":1103,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:53:22.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Mar 23 23:53:28.559: INFO: Pod pod-hostip-0f593b4d-337c-4957-bc17-c4cb41000e73 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:53:28.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1892" for this suite. 
• [SLOW TEST:6.111 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1141,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:53:28.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6940 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6940 I0323 23:53:28.680815 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6940, replica count: 2 I0323 23:53:31.731280 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 23:53:34.731539 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 23 23:53:34.731: INFO: Creating new exec pod Mar 23 23:53:39.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6940 execpodftwrm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 23 23:53:39.977: INFO: stderr: "I0323 23:53:39.882187 588 log.go:172] (0xc000a0c000) (0xc0009d4320) Create stream\nI0323 23:53:39.882266 588 log.go:172] (0xc000a0c000) (0xc0009d4320) Stream added, broadcasting: 1\nI0323 23:53:39.885774 588 log.go:172] (0xc000a0c000) Reply frame received for 1\nI0323 23:53:39.885839 588 log.go:172] (0xc000a0c000) (0xc000988000) Create stream\nI0323 23:53:39.885861 588 log.go:172] (0xc000a0c000) (0xc000988000) Stream added, broadcasting: 3\nI0323 23:53:39.886888 588 log.go:172] (0xc000a0c000) Reply frame received for 3\nI0323 23:53:39.886936 588 log.go:172] (0xc000a0c000) (0xc0009d43c0) Create stream\nI0323 23:53:39.886968 588 log.go:172] (0xc000a0c000) (0xc0009d43c0) Stream added, broadcasting: 5\nI0323 23:53:39.887980 588 log.go:172] (0xc000a0c000) Reply frame received for 5\nI0323 23:53:39.971618 588 log.go:172] (0xc000a0c000) Data frame received for 5\nI0323 23:53:39.971662 588 log.go:172] (0xc0009d43c0) (5) Data frame handling\nI0323 23:53:39.971672 588 log.go:172] (0xc0009d43c0) (5) Data frame sent\nI0323 23:53:39.971679 588 log.go:172] (0xc000a0c000) Data frame received for 5\nI0323 
23:53:39.971685 588 log.go:172] (0xc0009d43c0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0323 23:53:39.971711 588 log.go:172] (0xc000a0c000) Data frame received for 3\nI0323 23:53:39.971721 588 log.go:172] (0xc000988000) (3) Data frame handling\nI0323 23:53:39.973444 588 log.go:172] (0xc000a0c000) Data frame received for 1\nI0323 23:53:39.973471 588 log.go:172] (0xc0009d4320) (1) Data frame handling\nI0323 23:53:39.973487 588 log.go:172] (0xc0009d4320) (1) Data frame sent\nI0323 23:53:39.973499 588 log.go:172] (0xc000a0c000) (0xc0009d4320) Stream removed, broadcasting: 1\nI0323 23:53:39.973510 588 log.go:172] (0xc000a0c000) Go away received\nI0323 23:53:39.974282 588 log.go:172] (0xc000a0c000) (0xc0009d4320) Stream removed, broadcasting: 1\nI0323 23:53:39.974311 588 log.go:172] (0xc000a0c000) (0xc000988000) Stream removed, broadcasting: 3\nI0323 23:53:39.974322 588 log.go:172] (0xc000a0c000) (0xc0009d43c0) Stream removed, broadcasting: 5\n" Mar 23 23:53:39.977: INFO: stdout: "" Mar 23 23:53:39.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6940 execpodftwrm -- /bin/sh -x -c nc -zv -t -w 2 10.96.117.171 80' Mar 23 23:53:40.200: INFO: stderr: "I0323 23:53:40.113291 605 log.go:172] (0xc0009bd080) (0xc0009a06e0) Create stream\nI0323 23:53:40.113356 605 log.go:172] (0xc0009bd080) (0xc0009a06e0) Stream added, broadcasting: 1\nI0323 23:53:40.117842 605 log.go:172] (0xc0009bd080) Reply frame received for 1\nI0323 23:53:40.117886 605 log.go:172] (0xc0009bd080) (0xc000851220) Create stream\nI0323 23:53:40.117913 605 log.go:172] (0xc0009bd080) (0xc000851220) Stream added, broadcasting: 3\nI0323 23:53:40.119045 605 log.go:172] (0xc0009bd080) Reply frame received for 3\nI0323 23:53:40.119083 605 log.go:172] (0xc0009bd080) (0xc0006df5e0) Create stream\nI0323 23:53:40.119094 605 log.go:172] (0xc0009bd080) (0xc0006df5e0) Stream added, broadcasting: 5\nI0323 23:53:40.119921 605 log.go:172] (0xc0009bd080) Reply frame received for 5\nI0323 23:53:40.193469 605 log.go:172] (0xc0009bd080) Data frame received for 5\nI0323 23:53:40.193506 605 log.go:172] (0xc0006df5e0) (5) Data frame handling\nI0323 23:53:40.193522 605 log.go:172] (0xc0006df5e0) (5) Data frame sent\nI0323 23:53:40.193534 605 log.go:172] (0xc0009bd080) Data frame received for 5\nI0323 23:53:40.193544 605 log.go:172] (0xc0006df5e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.117.171 80\nConnection to 10.96.117.171 80 port [tcp/http] succeeded!\nI0323 23:53:40.193571 605 log.go:172] (0xc0009bd080) Data frame received for 3\nI0323 23:53:40.193584 605 log.go:172] (0xc000851220) (3) Data frame handling\nI0323 23:53:40.195387 605 log.go:172] (0xc0009bd080) Data frame received for 1\nI0323 23:53:40.195422 605 log.go:172] (0xc0009a06e0) (1) Data frame handling\nI0323 23:53:40.195455 605 log.go:172] (0xc0009a06e0) (1) Data frame sent\nI0323 23:53:40.195482 605 log.go:172] (0xc0009bd080) (0xc0009a06e0) Stream removed, broadcasting: 1\nI0323 23:53:40.195734 605 log.go:172] (0xc0009bd080) Go away received\nI0323 23:53:40.196099 605 log.go:172] (0xc0009bd080) (0xc0009a06e0) Stream removed, broadcasting: 1\nI0323 23:53:40.196121 605 log.go:172] (0xc0009bd080) (0xc000851220) Stream removed, broadcasting: 3\nI0323 23:53:40.196137 605 log.go:172] (0xc0009bd080) (0xc0006df5e0) Stream removed, broadcasting: 5\n" Mar 23 23:53:40.200: INFO: stdout: "" Mar 23 23:53:40.200: INFO: 
Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:53:40.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6940" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.681 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":75,"skipped":1153,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:53:40.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 23 23:53:40.288: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 23 23:53:40.306: INFO: Waiting for terminating namespaces to be deleted... 
Mar 23 23:53:40.309: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 23 23:53:40.314: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 23:53:40.314: INFO: Container kube-proxy ready: true, restart count 0 Mar 23 23:53:40.314: INFO: externalname-service-2bxcz from services-6940 started at 2020-03-23 23:53:28 +0000 UTC (1 container statuses recorded) Mar 23 23:53:40.314: INFO: Container externalname-service ready: true, restart count 0 Mar 23 23:53:40.314: INFO: pod-hostip-0f593b4d-337c-4957-bc17-c4cb41000e73 from pods-1892 started at 2020-03-23 23:53:22 +0000 UTC (1 container statuses recorded) Mar 23 23:53:40.314: INFO: Container test ready: false, restart count 0 Mar 23 23:53:40.314: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 23:53:40.314: INFO: Container kindnet-cni ready: true, restart count 0 Mar 23 23:53:40.314: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 23 23:53:40.318: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 23:53:40.318: INFO: Container kindnet-cni ready: true, restart count 0 Mar 23 23:53:40.318: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 23 23:53:40.318: INFO: Container kube-proxy ready: true, restart count 0 Mar 23 23:53:40.318: INFO: execpodftwrm from services-6940 started at 2020-03-23 23:53:34 +0000 UTC (1 container statuses recorded) Mar 23 23:53:40.318: INFO: Container agnhost-pause ready: true, restart count 0 Mar 23 23:53:40.318: INFO: externalname-service-fttpq from services-6940 started at 2020-03-23 23:53:28 +0000 UTC (1 container statuses recorded) Mar 23 23:53:40.318: INFO: Container externalname-service ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-01019a23-eb19-416f-84de-e8c116c32255 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-01019a23-eb19-416f-84de-e8c116c32255 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-01019a23-eb19-416f-84de-e8c116c32255 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:53:56.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4090" for this suite. 
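------------------------------
The predicate exercised above treats an occupied host port as the tuple (hostIP, hostPort, protocol): pod2 reuses port 54321 with a different hostIP, and pod3 reuses hostIP 127.0.0.2 with UDP instead of TCP, so all three schedule onto the same node. A sketch of the pod shape, reusing the earlier client setup; the container port and the label map (the spec's random kubernetes.io/e2e-... label) are illustrative:

func hostPortPod(name, hostIP string, proto corev1.Protocol, nodeLabel map[string]string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: nodeLabel, // steer all three pods to the labelled node
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080, // illustrative
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

// The three pods from the spec differ in exactly one coordinate each:
//   hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP, label)
//   hostPortPod("pod2", "127.0.0.2", corev1.ProtocolTCP, label) // same port, other hostIP
//   hostPortPod("pod3", "127.0.0.2", corev1.ProtocolUDP, label) // same hostIP, other protocol
------------------------------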
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.281 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":76,"skipped":1166,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:53:56.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-ed0d2ab8-b74c-454f-adbd-0ddc568c68c3 Mar 23 23:53:56.639: INFO: Pod name my-hostname-basic-ed0d2ab8-b74c-454f-adbd-0ddc568c68c3: Found 0 pods out of 1 Mar 23 23:54:01.658: INFO: Pod name my-hostname-basic-ed0d2ab8-b74c-454f-adbd-0ddc568c68c3: Found 1 pods out of 1 Mar 23 23:54:01.658: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ed0d2ab8-b74c-454f-adbd-0ddc568c68c3" are running Mar 23 23:54:01.661: INFO: Pod "my-hostname-basic-ed0d2ab8-b74c-454f-adbd-0ddc568c68c3-j5wx2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 23:53:56 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 23:53:58 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 23:53:58 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-23 23:53:56 +0000 UTC Reason: Message:}]) Mar 23 23:54:01.661: INFO: Trying to dial the pod Mar 23 23:54:06.671: INFO: Controller my-hostname-basic-ed0d2ab8-b74c-454f-adbd-0ddc568c68c3: Got expected result from replica 1 [my-hostname-basic-ed0d2ab8-b74c-454f-adbd-0ddc568c68c3-j5wx2]: "my-hostname-basic-ed0d2ab8-b74c-454f-adbd-0ddc568c68c3-j5wx2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:54:06.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8291" for this suite. 
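------------------------------
The ReplicationController spec above is the older-API twin of the earlier ReplicaSet one; the interesting step is "Trying to dial the pod", which fetches each replica through the apiserver's pod proxy and expects the pod's own name back from serve-hostname. A sketch of that dial, reusing the earlier client setup:

func dialReplica(ctx context.Context, cs kubernetes.Interface, ns, podName string) (string, error) {
	// GET / on port 80 of the pod via the apiserver proxy subresource;
	// serve-hostname responds with the pod's name.
	raw, err := cs.CoreV1().Pods(ns).ProxyGet("http", podName, "80", "/", nil).DoRaw(ctx)
	return string(raw), err
}

Proxying through the apiserver means the check works even when the test runner has no direct route to the pod network.
------------------------------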
• [SLOW TEST:10.148 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":77,"skipped":1174,"failed":0} S ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:54:06.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:54:06.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7060" for this suite. 
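------------------------------
Table transformation is opt-in per request: the client asks for the server-side Table rendering with an Accept media-type parameter, and a backend that cannot produce it answers 406 Not Acceptable, which is what the spec above asserts. A sketch of a successful Table request against the core pods resource, reusing the earlier client setup; the media type shown is the v1 form (older servers also accept a v1beta1 variant):

func listPodsAsTable(ctx context.Context, cs kubernetes.Interface, ns string) ([]byte, error) {
	// Returns a meta.k8s.io Table (column definitions plus rows) instead of a PodList.
	return cs.CoreV1().RESTClient().
		Get().
		Namespace(ns).
		Resource("pods").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		Do(ctx).
		Raw()
}
------------------------------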
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":78,"skipped":1175,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:54:06.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 23 23:54:10.876: INFO: &Pod{ObjectMeta:{send-events-8b3d4e12-0691-423c-b994-80a18b303a19 events-392 /api/v1/namespaces/events-392/pods/send-events-8b3d4e12-0691-423c-b994-80a18b303a19 55c79764-5d96-497e-8a81-a5b08f2d66b8 2273683 0 2020-03-23 23:54:06 +0000 UTC map[name:foo time:840896793] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2vh5z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2vh5z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2vh5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:
node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:54:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:54:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:54:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:54:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.41,StartTime:2020-03-23 23:54:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 23:54:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://72331e16300a2a07029125f9f977a19927d90d0d0ec9e73c0ab0b194bfecdfcd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 23 23:54:12.880: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 23 23:54:14.885: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:54:14.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-392" for this suite. 
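------------------------------
The events spec above asserts that two distinct components report on the pod: the scheduler (Scheduled) and the kubelet (Pulled/Created/Started). A sketch of the lookup it performs, reusing the earlier client setup (extra import: k8s.io/apimachinery/pkg/fields):

func podEvents(ctx context.Context, cs kubernetes.Interface, ns, podName string) error {
	sel := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      podName,
		"involvedObject.namespace": ns,
	}.AsSelector().String()
	evs, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		return err
	}
	for _, e := range evs.Items {
		// Source.Component is "default-scheduler" for the Scheduled event
		// and "kubelet" for the container lifecycle events.
		fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
	}
	return nil
}
------------------------------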
• [SLOW TEST:8.210 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":79,"skipped":1186,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:54:14.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-5934f4db-4c6a-46eb-8b9d-97fbb548c143 STEP: Creating a pod to test consume configMaps Mar 23 23:54:15.020: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a39a3914-b826-445f-b26e-46d9ea958aaf" in namespace "projected-5720" to be "Succeeded or Failed" Mar 23 23:54:15.023: INFO: Pod "pod-projected-configmaps-a39a3914-b826-445f-b26e-46d9ea958aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.226803ms Mar 23 23:54:17.027: INFO: Pod "pod-projected-configmaps-a39a3914-b826-445f-b26e-46d9ea958aaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007454873s Mar 23 23:54:19.032: INFO: Pod "pod-projected-configmaps-a39a3914-b826-445f-b26e-46d9ea958aaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011858076s STEP: Saw pod success Mar 23 23:54:19.032: INFO: Pod "pod-projected-configmaps-a39a3914-b826-445f-b26e-46d9ea958aaf" satisfied condition "Succeeded or Failed" Mar 23 23:54:19.034: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-a39a3914-b826-445f-b26e-46d9ea958aaf container projected-configmap-volume-test: STEP: delete the pod Mar 23 23:54:19.091: INFO: Waiting for pod pod-projected-configmaps-a39a3914-b826-445f-b26e-46d9ea958aaf to disappear Mar 23 23:54:19.095: INFO: Pod pod-projected-configmaps-a39a3914-b826-445f-b26e-46d9ea958aaf no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:54:19.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5720" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1199,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:54:19.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:54:35.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4750" for this suite. • [SLOW TEST:16.095 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":81,"skipped":1203,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:54:35.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:54:39.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2536" for this suite. 
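------------------------------
The ResourceQuota spec a little further up counts objects rather than compute resources: it sets a hard cap on configmaps, creates one, and waits for the quota controller to reflect it in Status.Used; its 16-second runtime is mostly those status polls. A sketch, reusing the earlier client setup (extra import: k8s.io/apimachinery/pkg/api/resource):

func quotaForConfigMaps(ctx context.Context, cs kubernetes.Interface, ns string) error {
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourceConfigMaps: resource.MustParse("1")},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{}); err != nil {
		return err
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx,
		&corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "test-cm"}},
		metav1.CreateOptions{}); err != nil {
		return err
	}
	// The quota controller updates Status.Used asynchronously, hence the
	// spec's repeated "Ensuring resource quota status ..." steps.
	return wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
		got, err := cs.CoreV1().ResourceQuotas(ns).Get(ctx, "test-quota", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		used := got.Status.Used[corev1.ResourceConfigMaps]
		return used.Value() == 1, nil
	})
}
------------------------------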
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":82,"skipped":1210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:54:39.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 23 23:54:39.491: INFO: Waiting up to 5m0s for pod "downward-api-19e57921-16c5-4022-91a4-098bca10311f" in namespace "downward-api-1680" to be "Succeeded or Failed" Mar 23 23:54:39.502: INFO: Pod "downward-api-19e57921-16c5-4022-91a4-098bca10311f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.624904ms Mar 23 23:54:41.505: INFO: Pod "downward-api-19e57921-16c5-4022-91a4-098bca10311f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014794966s Mar 23 23:54:43.515: INFO: Pod "downward-api-19e57921-16c5-4022-91a4-098bca10311f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023910089s STEP: Saw pod success Mar 23 23:54:43.515: INFO: Pod "downward-api-19e57921-16c5-4022-91a4-098bca10311f" satisfied condition "Succeeded or Failed" Mar 23 23:54:43.517: INFO: Trying to get logs from node latest-worker pod downward-api-19e57921-16c5-4022-91a4-098bca10311f container dapi-container: STEP: delete the pod Mar 23 23:54:43.534: INFO: Waiting for pod downward-api-19e57921-16c5-4022-91a4-098bca10311f to disappear Mar 23 23:54:43.538: INFO: Pod downward-api-19e57921-16c5-4022-91a4-098bca10311f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:54:43.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1680" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1253,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:54:43.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 23 23:54:48.763: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:54:48.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3935" for this suite. • [SLOW TEST:5.297 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":84,"skipped":1274,"failed":0} [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:54:48.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-2957/configmap-test-ac0fe452-be53-40ad-9d4d-a655c50cc21c STEP: Creating a pod to test consume configMaps Mar 23 23:54:48.972: INFO: Waiting up to 5m0s for pod "pod-configmaps-c2c04777-bb80-4682-8407-1d0acaa21614" in namespace "configmap-2957" to be "Succeeded or Failed" Mar 23 23:54:48.978: INFO: Pod "pod-configmaps-c2c04777-bb80-4682-8407-1d0acaa21614": Phase="Pending", Reason="", readiness=false. Elapsed: 5.865715ms Mar 23 23:54:51.084: INFO: Pod "pod-configmaps-c2c04777-bb80-4682-8407-1d0acaa21614": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.112013134s Mar 23 23:54:53.087: INFO: Pod "pod-configmaps-c2c04777-bb80-4682-8407-1d0acaa21614": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115656305s STEP: Saw pod success Mar 23 23:54:53.087: INFO: Pod "pod-configmaps-c2c04777-bb80-4682-8407-1d0acaa21614" satisfied condition "Succeeded or Failed" Mar 23 23:54:53.090: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c2c04777-bb80-4682-8407-1d0acaa21614 container env-test: STEP: delete the pod Mar 23 23:54:53.111: INFO: Waiting for pod pod-configmaps-c2c04777-bb80-4682-8407-1d0acaa21614 to disappear Mar 23 23:54:53.115: INFO: Pod pod-configmaps-c2c04777-bb80-4682-8407-1d0acaa21614 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:54:53.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2957" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1274,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:54:53.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 23:54:53.335: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b84830e8-fc3b-4417-b4fd-ad964592f1f1" in namespace "projected-9118" to be "Succeeded or Failed" Mar 23 23:54:53.348: INFO: Pod "downwardapi-volume-b84830e8-fc3b-4417-b4fd-ad964592f1f1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.3626ms Mar 23 23:54:55.479: INFO: Pod "downwardapi-volume-b84830e8-fc3b-4417-b4fd-ad964592f1f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144468839s Mar 23 23:54:57.483: INFO: Pod "downwardapi-volume-b84830e8-fc3b-4417-b4fd-ad964592f1f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.148916819s STEP: Saw pod success Mar 23 23:54:57.484: INFO: Pod "downwardapi-volume-b84830e8-fc3b-4417-b4fd-ad964592f1f1" satisfied condition "Succeeded or Failed" Mar 23 23:54:57.487: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b84830e8-fc3b-4417-b4fd-ad964592f1f1 container client-container: STEP: delete the pod Mar 23 23:54:57.505: INFO: Waiting for pod downwardapi-volume-b84830e8-fc3b-4417-b4fd-ad964592f1f1 to disappear Mar 23 23:54:57.509: INFO: Pod downwardapi-volume-b84830e8-fc3b-4417-b4fd-ad964592f1f1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:54:57.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9118" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1278,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:54:57.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-b84e1050-b802-42ba-8702-b0fd18d3b716 STEP: Creating a pod to test consume secrets Mar 23 23:54:57.587: INFO: Waiting up to 5m0s for pod "pod-secrets-f1daff0f-fcd9-4bd3-9382-0cf24e049220" in namespace "secrets-1449" to be "Succeeded or Failed" Mar 23 23:54:57.611: INFO: Pod "pod-secrets-f1daff0f-fcd9-4bd3-9382-0cf24e049220": Phase="Pending", Reason="", readiness=false. Elapsed: 23.567871ms Mar 23 23:54:59.615: INFO: Pod "pod-secrets-f1daff0f-fcd9-4bd3-9382-0cf24e049220": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027764832s Mar 23 23:55:01.619: INFO: Pod "pod-secrets-f1daff0f-fcd9-4bd3-9382-0cf24e049220": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031966629s STEP: Saw pod success Mar 23 23:55:01.619: INFO: Pod "pod-secrets-f1daff0f-fcd9-4bd3-9382-0cf24e049220" satisfied condition "Succeeded or Failed" Mar 23 23:55:01.622: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-f1daff0f-fcd9-4bd3-9382-0cf24e049220 container secret-volume-test: STEP: delete the pod Mar 23 23:55:01.664: INFO: Waiting for pod pod-secrets-f1daff0f-fcd9-4bd3-9382-0cf24e049220 to disappear Mar 23 23:55:01.673: INFO: Pod pod-secrets-f1daff0f-fcd9-4bd3-9382-0cf24e049220 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:55:01.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1449" for this suite. 
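------------------------------
The secrets spec above combines two knobs: defaultMode controls the file mode of every projected key, and the pod-level fsGroup makes the kubelet chown the volume so a non-root user can read it. A sketch, reusing the earlier client setup; the uid/gid values and image are illustrative, and "test-secret" must already exist, as the spec's first STEP creates it:

func secretVolumePod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	mode := int32(0440)    // applied to each file in the volume
	fsGroup := int64(1001) // volume files become readable by this group
	uid := int64(1000)     // run the container as non-root
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &fsGroup,
			},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox:1.31", // illustrative
				Command:      []string{"sh", "-c", "ls -ln /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "test-secret",
						DefaultMode: &mode,
					},
				},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
------------------------------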
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1299,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:55:01.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:55:01.762: INFO: Waiting up to 5m0s for pod "busybox-user-65534-df757c0b-72f6-4889-8d2f-3a46797dacbc" in namespace "security-context-test-5157" to be "Succeeded or Failed" Mar 23 23:55:01.783: INFO: Pod "busybox-user-65534-df757c0b-72f6-4889-8d2f-3a46797dacbc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.183355ms Mar 23 23:55:03.787: INFO: Pod "busybox-user-65534-df757c0b-72f6-4889-8d2f-3a46797dacbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024462704s Mar 23 23:55:05.791: INFO: Pod "busybox-user-65534-df757c0b-72f6-4889-8d2f-3a46797dacbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028206984s Mar 23 23:55:05.791: INFO: Pod "busybox-user-65534-df757c0b-72f6-4889-8d2f-3a46797dacbc" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:55:05.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5157" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:55:05.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:55:05.911: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 23 23:55:10.928: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 23 23:55:10.928: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 23 23:55:12.932: INFO: Creating deployment "test-rollover-deployment" Mar 23 23:55:12.942: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 23 23:55:14.949: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 23 23:55:14.955: INFO: Ensure that both replica sets have 1 created replica Mar 23 23:55:14.961: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 23 23:55:14.967: INFO: Updating deployment test-rollover-deployment Mar 23 23:55:14.968: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 23 23:55:16.976: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 23 23:55:16.982: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 23 23:55:16.988: INFO: all replica sets need to contain the pod-template-hash label Mar 23 23:55:16.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604515, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604512, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 23:55:18.995: INFO: all replica sets need to contain the pod-template-hash label Mar 23 23:55:18.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604518, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604512, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 23:55:20.994: INFO: all replica sets need to contain the pod-template-hash label Mar 23 23:55:20.994: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604518, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604512, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 23:55:22.996: INFO: all replica sets need to contain the pod-template-hash label Mar 23 23:55:22.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604518, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604512, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 23:55:24.996: INFO: all replica sets need to contain the pod-template-hash label Mar 23 23:55:24.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604518, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604512, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 23:55:26.996: INFO: all replica sets need to contain the pod-template-hash label Mar 23 23:55:26.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604513, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604518, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604512, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 23 23:55:28.996: INFO: Mar 23 23:55:28.996: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 23 23:55:29.006: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2328 /apis/apps/v1/namespaces/deployment-2328/deployments/test-rollover-deployment cea9c4c7-55da-4875-a628-4fcb8c4b0581 2274277 2 2020-03-23 23:55:12 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c2b1f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-23 23:55:13 +0000 UTC,LastTransitionTime:2020-03-23 23:55:13 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-03-23 23:55:28 +0000 UTC,LastTransitionTime:2020-03-23 23:55:12 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 23 23:55:29.010: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-2328 /apis/apps/v1/namespaces/deployment-2328/replicasets/test-rollover-deployment-78df7bc796 b4e2d3f5-1ad4-4d24-b2e6-2eebeb71086c 2274265 2 2020-03-23 23:55:14 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment cea9c4c7-55da-4875-a628-4fcb8c4b0581 0xc004c2b6f7 0xc004c2b6f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c2b768 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 23 23:55:29.010: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 23 23:55:29.010: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2328 /apis/apps/v1/namespaces/deployment-2328/replicasets/test-rollover-controller f5dc5225-0257-4512-b6ae-13929d466960 2274275 2 2020-03-23 23:55:05 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment cea9c4c7-55da-4875-a628-4fcb8c4b0581 0xc004c2b60f 0xc004c2b620}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004c2b688 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 23 23:55:29.010: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-2328 /apis/apps/v1/namespaces/deployment-2328/replicasets/test-rollover-deployment-f6c94f66c 6ba1a830-5217-4d7e-9a66-774565133ec5 2274213 2 2020-03-23 23:55:12 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment cea9c4c7-55da-4875-a628-4fcb8c4b0581 0xc004c2b7d0 0xc004c2b7d1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004c2b848 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 23 23:55:29.013: INFO: Pod "test-rollover-deployment-78df7bc796-247b2" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-247b2 test-rollover-deployment-78df7bc796- deployment-2328 /api/v1/namespaces/deployment-2328/pods/test-rollover-deployment-78df7bc796-247b2 b57d7c63-a14a-4fdf-8f20-457eb76a1dfd 2274233 0 2020-03-23 23:55:15 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 b4e2d3f5-1ad4-4d24-b2e6-2eebeb71086c 0xc004c2bdf7 0xc004c2bdf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kks6c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kks6c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kks6c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:55:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:55:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:55:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-23 23:55:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.169,StartTime:2020-03-23 23:55:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-23 23:55:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://4e531a1639503b950ef01b6d2486dfa35caa24386c12eb0f6fdf39dcdae3a7e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:55:29.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2328" for this suite. • [SLOW TEST:23.223 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":89,"skipped":1336,"failed":0} S ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:55:29.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-1188 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1188 to expose endpoints map[] Mar 23 23:55:29.237: INFO: successfully validated that service endpoint-test2 in namespace services-1188 exposes endpoints map[] (12.207098ms elapsed) STEP: Creating pod pod1 in namespace services-1188 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1188 to expose endpoints map[pod1:[80]] Mar 23 23:55:32.339: INFO: successfully validated that service endpoint-test2 in namespace services-1188 exposes endpoints map[pod1:[80]] (3.093806673s elapsed) STEP: Creating pod pod2 in namespace services-1188 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1188 to expose endpoints map[pod1:[80] pod2:[80]] Mar 23 23:55:35.439: INFO: successfully validated that service endpoint-test2 in namespace services-1188 exposes endpoints map[pod1:[80] pod2:[80]] (3.096199557s elapsed) STEP: Deleting pod pod1 in namespace services-1188 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1188 to expose endpoints map[pod2:[80]] Mar 23 23:55:36.506: INFO: 
successfully validated that service endpoint-test2 in namespace services-1188 exposes endpoints map[pod2:[80]] (1.062113886s elapsed) STEP: Deleting pod pod2 in namespace services-1188 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1188 to expose endpoints map[] Mar 23 23:55:37.517: INFO: successfully validated that service endpoint-test2 in namespace services-1188 exposes endpoints map[] (1.007520198s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:55:37.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1188" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.558 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":90,"skipped":1337,"failed":0} SSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:55:37.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 23 23:55:42.264: INFO: Successfully updated pod "pod-update-26784f50-d996-4623-819b-e0d16290270c" STEP: verifying the updated pod is in kubernetes Mar 23 23:55:42.308: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:55:42.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5574" for this suite. 
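------------------------------
The pod update above is the standard read-modify-write cycle against the API server. A minimal sketch of that pattern (hypothetical package, function name, and label value; the suite's actual helper differs), retrying on optimistic-concurrency conflicts the way well-behaved clients should:

package podupdate

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePodLabel GETs the pod, mutates a label, and PUTs it back. If another
// writer bumped resourceVersion in between, Update fails with a Conflict and
// RetryOnConflict re-runs the closure against a fresh copy of the object.
func updatePodLabel(client kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // hypothetical value; the test flips a label much like this
		_, err = client.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
		return err
	})
}
------------------------------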
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1341,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:55:42.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7287 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7287 STEP: Creating statefulset with conflicting port in namespace statefulset-7287 STEP: Waiting until pod test-pod will start running in namespace statefulset-7287 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7287 Mar 23 23:55:46.401: INFO: Observed stateful pod in namespace: statefulset-7287, name: ss-0, uid: b8ebdec8-d04e-441c-955e-8f6d75588a63, status phase: Failed. Waiting for statefulset controller to delete. Mar 23 23:55:46.453: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7287 STEP: Removing pod with conflicting port in namespace statefulset-7287 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7287 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 23 23:55:50.569: INFO: Deleting all statefulset in ns statefulset-7287 Mar 23 23:55:50.572: INFO: Scaling statefulset ss to 0 Mar 23 23:56:10.589: INFO: Waiting for statefulset status.replicas updated to 0 Mar 23 23:56:10.592: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:56:10.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7287" for this suite. 
• [SLOW TEST:28.300 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":92,"skipped":1375,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:56:10.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 23 23:56:10.655: INFO: PodSpec: initContainers in spec.initContainers Mar 23 23:56:59.590: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-ac27b9f5-096c-41b9-8f62-cb3d78ebb54b", GenerateName:"", Namespace:"init-container-618", SelfLink:"/api/v1/namespaces/init-container-618/pods/pod-init-ac27b9f5-096c-41b9-8f62-cb3d78ebb54b", UID:"44454136-7d8a-43b5-802a-d4faf796f2f8", ResourceVersion:"2274818", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720604570, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"655761115"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-hpwm6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001ce8e80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), 
VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hpwm6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hpwm6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-hpwm6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc002594518), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002d8e150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002594660)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0025946f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0025946f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0025946fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604570, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604570, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604570, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604570, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.2.172", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.172"}}, StartTime:(*v1.Time)(0xc002194340), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d8e2a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d8e310)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://66f6289150deca6b3ddfbc99dacdcf00ee6f977364c1f90724970b8d01011b22", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002194480), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002194360), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00259478f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:56:59.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-618" for this suite. • [SLOW TEST:49.019 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":93,"skipped":1378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:56:59.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8680.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8680.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 23 23:57:05.764: INFO: DNS probes using dns-8680/dns-test-d1edd911-77ff-4eb4-877d-4d6905d11f2b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:05.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8680" for this suite. • [SLOW TEST:6.230 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":94,"skipped":1406,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:05.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:57:06.186: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 38.030529ms)
Mar 23 23:57:06.192: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 6.094444ms)
Mar 23 23:57:06.195: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.543488ms)
Mar 23 23:57:06.199: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.186539ms)
Mar 23 23:57:06.202: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.854526ms)
Mar 23 23:57:06.206: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.52711ms)
Mar 23 23:57:06.210: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.596255ms)
Mar 23 23:57:06.213: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.405024ms)
Mar 23 23:57:06.216: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.101496ms)
Mar 23 23:57:06.220: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.415873ms)
Mar 23 23:57:06.223: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.567849ms)
Mar 23 23:57:06.228: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 4.468038ms)
Mar 23 23:57:06.231: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.68661ms)
Mar 23 23:57:06.234: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.695008ms)
Mar 23 23:57:06.237: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.667419ms)
Mar 23 23:57:06.278: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 41.117648ms)
Mar 23 23:57:06.282: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.585403ms)
Mar 23 23:57:06.285: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.890828ms)
Mar 23 23:57:06.288: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.971135ms)
Mar 23 23:57:06.291: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/
(200; 2.827893ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:06.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4639" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":95,"skipped":1427,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:06.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:57:06.347: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1146 I0323 23:57:06.365967 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1146, replica count: 1 I0323 23:57:07.416445 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 23:57:08.416708 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 23:57:09.416958 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0323 23:57:10.417296 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 23 23:57:10.545: INFO: Created: latency-svc-8994l Mar 23 23:57:10.581: INFO: Got endpoints: latency-svc-8994l [63.512645ms] Mar 23 23:57:10.622: INFO: Created: latency-svc-wcklk Mar 23 23:57:10.631: INFO: Got endpoints: latency-svc-wcklk [50.491757ms] Mar 23 23:57:10.653: INFO: Created: latency-svc-zbz9v Mar 23 23:57:10.662: INFO: Got endpoints: latency-svc-zbz9v [80.76091ms] Mar 23 23:57:10.703: INFO: Created: latency-svc-9gddx Mar 23 23:57:10.709: INFO: Got endpoints: latency-svc-9gddx [127.91213ms] Mar 23 23:57:10.734: INFO: Created: latency-svc-2mc59 Mar 23 23:57:10.746: INFO: Got endpoints: latency-svc-2mc59 [163.773247ms] Mar 23 23:57:10.767: INFO: Created: latency-svc-cst2k Mar 23 23:57:10.782: INFO: Got endpoints: latency-svc-cst2k [199.989155ms] Mar 23 23:57:10.828: INFO: Created: latency-svc-mrjpw Mar 23 23:57:10.848: INFO: Got endpoints: latency-svc-mrjpw [266.191343ms] Mar 23 23:57:10.848: INFO: Created: latency-svc-7ftgr Mar 23 23:57:10.860: INFO: Got endpoints: latency-svc-7ftgr [277.668631ms] Mar 23 23:57:10.878: INFO: Created: latency-svc-7fjxw Mar 23 23:57:10.890: INFO: Got endpoints: latency-svc-7fjxw [307.714349ms] Mar 23 23:57:10.926: INFO: Created: latency-svc-hdd5t Mar 23 23:57:10.978: INFO: Got endpoints: latency-svc-hdd5t [396.376166ms] Mar 23 23:57:10.989: INFO: Created: latency-svc-mstw4 Mar 23 23:57:11.006: INFO: Got 
endpoints: latency-svc-mstw4 [424.009784ms] Mar 23 23:57:11.055: INFO: Created: latency-svc-cvg6w Mar 23 23:57:11.104: INFO: Got endpoints: latency-svc-cvg6w [522.129312ms] Mar 23 23:57:11.123: INFO: Created: latency-svc-4jcsp Mar 23 23:57:11.139: INFO: Got endpoints: latency-svc-4jcsp [556.51135ms] Mar 23 23:57:11.153: INFO: Created: latency-svc-8n4vx Mar 23 23:57:11.168: INFO: Got endpoints: latency-svc-8n4vx [585.695017ms] Mar 23 23:57:11.184: INFO: Created: latency-svc-fncjf Mar 23 23:57:11.198: INFO: Got endpoints: latency-svc-fncjf [616.678499ms] Mar 23 23:57:11.229: INFO: Created: latency-svc-rtpwq Mar 23 23:57:11.246: INFO: Got endpoints: latency-svc-rtpwq [663.585994ms] Mar 23 23:57:11.283: INFO: Created: latency-svc-747l2 Mar 23 23:57:11.298: INFO: Got endpoints: latency-svc-747l2 [666.9643ms] Mar 23 23:57:11.319: INFO: Created: latency-svc-2tjcd Mar 23 23:57:11.368: INFO: Got endpoints: latency-svc-2tjcd [705.905005ms] Mar 23 23:57:11.369: INFO: Created: latency-svc-sb94j Mar 23 23:57:11.375: INFO: Got endpoints: latency-svc-sb94j [665.530668ms] Mar 23 23:57:11.424: INFO: Created: latency-svc-2bb5z Mar 23 23:57:11.440: INFO: Got endpoints: latency-svc-2bb5z [694.916672ms] Mar 23 23:57:11.511: INFO: Created: latency-svc-nvs7q Mar 23 23:57:11.535: INFO: Got endpoints: latency-svc-nvs7q [752.973469ms] Mar 23 23:57:11.568: INFO: Created: latency-svc-sxcrt Mar 23 23:57:11.584: INFO: Got endpoints: latency-svc-sxcrt [736.27401ms] Mar 23 23:57:11.610: INFO: Created: latency-svc-dk7mk Mar 23 23:57:11.637: INFO: Got endpoints: latency-svc-dk7mk [777.39458ms] Mar 23 23:57:11.658: INFO: Created: latency-svc-r4qxr Mar 23 23:57:11.675: INFO: Got endpoints: latency-svc-r4qxr [785.026617ms] Mar 23 23:57:11.703: INFO: Created: latency-svc-7gchw Mar 23 23:57:11.734: INFO: Got endpoints: latency-svc-7gchw [755.375987ms] Mar 23 23:57:11.801: INFO: Created: latency-svc-nfhkn Mar 23 23:57:11.837: INFO: Got endpoints: latency-svc-nfhkn [831.273124ms] Mar 23 23:57:11.862: INFO: Created: latency-svc-hmhgx Mar 23 23:57:11.875: INFO: Got endpoints: latency-svc-hmhgx [770.287806ms] Mar 23 23:57:11.919: INFO: Created: latency-svc-vm7gb Mar 23 23:57:11.923: INFO: Got endpoints: latency-svc-vm7gb [783.964313ms] Mar 23 23:57:11.973: INFO: Created: latency-svc-dngjr Mar 23 23:57:11.988: INFO: Got endpoints: latency-svc-dngjr [820.629893ms] Mar 23 23:57:12.015: INFO: Created: latency-svc-v6w94 Mar 23 23:57:12.050: INFO: Got endpoints: latency-svc-v6w94 [851.989162ms] Mar 23 23:57:12.071: INFO: Created: latency-svc-jgcxq Mar 23 23:57:12.084: INFO: Got endpoints: latency-svc-jgcxq [838.717339ms] Mar 23 23:57:12.138: INFO: Created: latency-svc-6tl5n Mar 23 23:57:12.194: INFO: Got endpoints: latency-svc-6tl5n [895.113047ms] Mar 23 23:57:12.219: INFO: Created: latency-svc-jxfxd Mar 23 23:57:12.232: INFO: Got endpoints: latency-svc-jxfxd [863.877209ms] Mar 23 23:57:12.249: INFO: Created: latency-svc-tnz2k Mar 23 23:57:12.267: INFO: Got endpoints: latency-svc-tnz2k [892.358889ms] Mar 23 23:57:12.363: INFO: Created: latency-svc-g6zdj Mar 23 23:57:12.375: INFO: Got endpoints: latency-svc-g6zdj [934.385472ms] Mar 23 23:57:12.401: INFO: Created: latency-svc-cmdbw Mar 23 23:57:12.411: INFO: Got endpoints: latency-svc-cmdbw [876.225967ms] Mar 23 23:57:12.487: INFO: Created: latency-svc-xbcdg Mar 23 23:57:12.510: INFO: Created: latency-svc-mdhr6 Mar 23 23:57:12.510: INFO: Got endpoints: latency-svc-xbcdg [925.63953ms] Mar 23 23:57:12.522: INFO: Got endpoints: latency-svc-mdhr6 [884.588781ms] Mar 23 23:57:12.546: INFO: 
Created: latency-svc-lfz7x Mar 23 23:57:12.559: INFO: Got endpoints: latency-svc-lfz7x [884.374216ms] Mar 23 23:57:12.579: INFO: Created: latency-svc-7klpf Mar 23 23:57:12.613: INFO: Got endpoints: latency-svc-7klpf [878.940001ms] Mar 23 23:57:12.633: INFO: Created: latency-svc-thrzj Mar 23 23:57:12.654: INFO: Got endpoints: latency-svc-thrzj [816.255573ms] Mar 23 23:57:12.681: INFO: Created: latency-svc-z2xnd Mar 23 23:57:12.695: INFO: Got endpoints: latency-svc-z2xnd [820.91819ms] Mar 23 23:57:12.745: INFO: Created: latency-svc-8tt8g Mar 23 23:57:12.761: INFO: Got endpoints: latency-svc-8tt8g [838.475082ms] Mar 23 23:57:12.798: INFO: Created: latency-svc-zh4qj Mar 23 23:57:12.821: INFO: Got endpoints: latency-svc-zh4qj [832.934885ms] Mar 23 23:57:12.882: INFO: Created: latency-svc-cbf48 Mar 23 23:57:12.897: INFO: Got endpoints: latency-svc-cbf48 [846.698993ms] Mar 23 23:57:12.927: INFO: Created: latency-svc-6n8c2 Mar 23 23:57:12.945: INFO: Got endpoints: latency-svc-6n8c2 [859.993107ms] Mar 23 23:57:12.965: INFO: Created: latency-svc-cr5d2 Mar 23 23:57:12.980: INFO: Got endpoints: latency-svc-cr5d2 [786.431869ms] Mar 23 23:57:13.021: INFO: Created: latency-svc-m6pqv Mar 23 23:57:13.028: INFO: Got endpoints: latency-svc-m6pqv [796.461495ms] Mar 23 23:57:13.049: INFO: Created: latency-svc-6bhzm Mar 23 23:57:13.095: INFO: Got endpoints: latency-svc-6bhzm [827.890234ms] Mar 23 23:57:13.158: INFO: Created: latency-svc-pv7nb Mar 23 23:57:13.187: INFO: Created: latency-svc-wp45k Mar 23 23:57:13.188: INFO: Got endpoints: latency-svc-pv7nb [812.568868ms] Mar 23 23:57:13.203: INFO: Got endpoints: latency-svc-wp45k [791.581236ms] Mar 23 23:57:13.223: INFO: Created: latency-svc-jpw4q Mar 23 23:57:13.234: INFO: Got endpoints: latency-svc-jpw4q [724.548119ms] Mar 23 23:57:13.247: INFO: Created: latency-svc-x676d Mar 23 23:57:13.325: INFO: Got endpoints: latency-svc-x676d [803.553375ms] Mar 23 23:57:13.340: INFO: Created: latency-svc-tbj5c Mar 23 23:57:13.355: INFO: Got endpoints: latency-svc-tbj5c [795.756864ms] Mar 23 23:57:13.376: INFO: Created: latency-svc-crblw Mar 23 23:57:13.391: INFO: Got endpoints: latency-svc-crblw [777.757593ms] Mar 23 23:57:13.409: INFO: Created: latency-svc-sb4ft Mar 23 23:57:13.445: INFO: Got endpoints: latency-svc-sb4ft [791.200789ms] Mar 23 23:57:13.463: INFO: Created: latency-svc-jxlp9 Mar 23 23:57:13.474: INFO: Got endpoints: latency-svc-jxlp9 [778.711816ms] Mar 23 23:57:13.506: INFO: Created: latency-svc-2s8ph Mar 23 23:57:13.516: INFO: Got endpoints: latency-svc-2s8ph [754.931415ms] Mar 23 23:57:13.544: INFO: Created: latency-svc-bgplx Mar 23 23:57:13.571: INFO: Got endpoints: latency-svc-bgplx [749.055972ms] Mar 23 23:57:13.604: INFO: Created: latency-svc-rxrms Mar 23 23:57:13.615: INFO: Got endpoints: latency-svc-rxrms [718.166914ms] Mar 23 23:57:13.661: INFO: Created: latency-svc-7jzmf Mar 23 23:57:13.691: INFO: Got endpoints: latency-svc-7jzmf [746.04202ms] Mar 23 23:57:13.709: INFO: Created: latency-svc-rdpvl Mar 23 23:57:13.723: INFO: Got endpoints: latency-svc-rdpvl [742.821617ms] Mar 23 23:57:13.760: INFO: Created: latency-svc-xq9j5 Mar 23 23:57:13.783: INFO: Got endpoints: latency-svc-xq9j5 [754.892472ms] Mar 23 23:57:13.853: INFO: Created: latency-svc-8m6pm Mar 23 23:57:13.861: INFO: Got endpoints: latency-svc-8m6pm [765.645058ms] Mar 23 23:57:13.883: INFO: Created: latency-svc-lhpx8 Mar 23 23:57:13.897: INFO: Got endpoints: latency-svc-lhpx8 [709.179786ms] Mar 23 23:57:13.931: INFO: Created: latency-svc-6scs9 Mar 23 23:57:14.547: INFO: Got endpoints: 
latency-svc-6scs9 [1.344370533s] Mar 23 23:57:14.961: INFO: Created: latency-svc-bjz7t Mar 23 23:57:14.980: INFO: Created: latency-svc-nt7lz Mar 23 23:57:14.980: INFO: Got endpoints: latency-svc-bjz7t [1.745797069s] Mar 23 23:57:15.015: INFO: Got endpoints: latency-svc-nt7lz [1.690185861s] Mar 23 23:57:15.053: INFO: Created: latency-svc-hfrql Mar 23 23:57:15.080: INFO: Got endpoints: latency-svc-hfrql [1.724681975s] Mar 23 23:57:15.102: INFO: Created: latency-svc-ktzs7 Mar 23 23:57:15.115: INFO: Got endpoints: latency-svc-ktzs7 [1.724464154s] Mar 23 23:57:15.133: INFO: Created: latency-svc-m98nt Mar 23 23:57:15.146: INFO: Got endpoints: latency-svc-m98nt [1.700857968s] Mar 23 23:57:15.163: INFO: Created: latency-svc-w4w5k Mar 23 23:57:15.175: INFO: Got endpoints: latency-svc-w4w5k [1.701175216s] Mar 23 23:57:15.217: INFO: Created: latency-svc-bx4ck Mar 23 23:57:15.244: INFO: Created: latency-svc-jk67c Mar 23 23:57:15.244: INFO: Got endpoints: latency-svc-bx4ck [1.727919521s] Mar 23 23:57:15.285: INFO: Got endpoints: latency-svc-jk67c [1.714839049s] Mar 23 23:57:15.312: INFO: Created: latency-svc-8cw5l Mar 23 23:57:15.337: INFO: Got endpoints: latency-svc-8cw5l [1.722246723s] Mar 23 23:57:15.367: INFO: Created: latency-svc-d9hnq Mar 23 23:57:15.376: INFO: Got endpoints: latency-svc-d9hnq [1.685201553s] Mar 23 23:57:15.393: INFO: Created: latency-svc-5zw9h Mar 23 23:57:15.400: INFO: Got endpoints: latency-svc-5zw9h [1.676577883s] Mar 23 23:57:15.418: INFO: Created: latency-svc-rdqmr Mar 23 23:57:15.430: INFO: Got endpoints: latency-svc-rdqmr [1.646567974s] Mar 23 23:57:15.481: INFO: Created: latency-svc-n56lm Mar 23 23:57:15.487: INFO: Got endpoints: latency-svc-n56lm [1.625643656s] Mar 23 23:57:15.505: INFO: Created: latency-svc-mt8qb Mar 23 23:57:15.517: INFO: Got endpoints: latency-svc-mt8qb [1.619803666s] Mar 23 23:57:15.534: INFO: Created: latency-svc-tmmh5 Mar 23 23:57:15.547: INFO: Got endpoints: latency-svc-tmmh5 [999.417925ms] Mar 23 23:57:15.565: INFO: Created: latency-svc-9kd5b Mar 23 23:57:15.576: INFO: Got endpoints: latency-svc-9kd5b [596.047608ms] Mar 23 23:57:15.625: INFO: Created: latency-svc-ggzdc Mar 23 23:57:15.630: INFO: Got endpoints: latency-svc-ggzdc [614.97538ms] Mar 23 23:57:15.652: INFO: Created: latency-svc-k6x4g Mar 23 23:57:15.660: INFO: Got endpoints: latency-svc-k6x4g [580.659471ms] Mar 23 23:57:15.681: INFO: Created: latency-svc-rqv9m Mar 23 23:57:15.711: INFO: Got endpoints: latency-svc-rqv9m [596.02063ms] Mar 23 23:57:15.780: INFO: Created: latency-svc-lx9q5 Mar 23 23:57:15.799: INFO: Created: latency-svc-4g6bz Mar 23 23:57:15.800: INFO: Got endpoints: latency-svc-lx9q5 [653.768418ms] Mar 23 23:57:15.808: INFO: Got endpoints: latency-svc-4g6bz [632.793084ms] Mar 23 23:57:15.826: INFO: Created: latency-svc-xw8kk Mar 23 23:57:15.856: INFO: Got endpoints: latency-svc-xw8kk [611.493296ms] Mar 23 23:57:15.874: INFO: Created: latency-svc-d5vmh Mar 23 23:57:15.924: INFO: Got endpoints: latency-svc-d5vmh [638.709655ms] Mar 23 23:57:15.926: INFO: Created: latency-svc-7stzd Mar 23 23:57:15.933: INFO: Got endpoints: latency-svc-7stzd [595.792423ms] Mar 23 23:57:15.955: INFO: Created: latency-svc-n4whw Mar 23 23:57:15.963: INFO: Got endpoints: latency-svc-n4whw [587.019343ms] Mar 23 23:57:15.988: INFO: Created: latency-svc-669wv Mar 23 23:57:15.999: INFO: Got endpoints: latency-svc-669wv [599.608381ms] Mar 23 23:57:16.062: INFO: Created: latency-svc-8b5w5 Mar 23 23:57:16.084: INFO: Got endpoints: latency-svc-8b5w5 [653.935083ms] Mar 23 23:57:16.085: INFO: Created: 
latency-svc-54h8r Mar 23 23:57:16.105: INFO: Got endpoints: latency-svc-54h8r [618.499526ms] Mar 23 23:57:16.136: INFO: Created: latency-svc-tvfzh Mar 23 23:57:16.158: INFO: Got endpoints: latency-svc-tvfzh [641.449086ms] Mar 23 23:57:16.206: INFO: Created: latency-svc-cxlsq Mar 23 23:57:16.212: INFO: Got endpoints: latency-svc-cxlsq [664.863261ms] Mar 23 23:57:16.252: INFO: Created: latency-svc-v5zcw Mar 23 23:57:16.285: INFO: Got endpoints: latency-svc-v5zcw [708.254611ms] Mar 23 23:57:16.361: INFO: Created: latency-svc-2jkcr Mar 23 23:57:16.368: INFO: Got endpoints: latency-svc-2jkcr [737.068288ms] Mar 23 23:57:16.423: INFO: Created: latency-svc-t7b22 Mar 23 23:57:16.446: INFO: Got endpoints: latency-svc-t7b22 [785.325939ms] Mar 23 23:57:16.488: INFO: Created: latency-svc-ccdlw Mar 23 23:57:16.503: INFO: Got endpoints: latency-svc-ccdlw [792.24342ms] Mar 23 23:57:16.504: INFO: Created: latency-svc-xnstz Mar 23 23:57:16.520: INFO: Got endpoints: latency-svc-xnstz [720.786414ms] Mar 23 23:57:16.540: INFO: Created: latency-svc-s985j Mar 23 23:57:16.556: INFO: Got endpoints: latency-svc-s985j [748.027564ms] Mar 23 23:57:16.579: INFO: Created: latency-svc-tmjdk Mar 23 23:57:16.607: INFO: Got endpoints: latency-svc-tmjdk [751.115601ms] Mar 23 23:57:16.633: INFO: Created: latency-svc-lbtcz Mar 23 23:57:16.646: INFO: Got endpoints: latency-svc-lbtcz [721.915801ms] Mar 23 23:57:16.666: INFO: Created: latency-svc-pk8cx Mar 23 23:57:16.683: INFO: Got endpoints: latency-svc-pk8cx [749.583545ms] Mar 23 23:57:16.702: INFO: Created: latency-svc-9m2pg Mar 23 23:57:16.739: INFO: Got endpoints: latency-svc-9m2pg [775.604068ms] Mar 23 23:57:16.771: INFO: Created: latency-svc-gvmrk Mar 23 23:57:16.787: INFO: Got endpoints: latency-svc-gvmrk [787.637066ms] Mar 23 23:57:16.813: INFO: Created: latency-svc-4x4d5 Mar 23 23:57:16.829: INFO: Got endpoints: latency-svc-4x4d5 [745.673732ms] Mar 23 23:57:16.882: INFO: Created: latency-svc-l47lq Mar 23 23:57:16.888: INFO: Got endpoints: latency-svc-l47lq [783.170342ms] Mar 23 23:57:16.923: INFO: Created: latency-svc-22tth Mar 23 23:57:16.937: INFO: Got endpoints: latency-svc-22tth [778.993253ms] Mar 23 23:57:16.953: INFO: Created: latency-svc-klvf5 Mar 23 23:57:16.966: INFO: Got endpoints: latency-svc-klvf5 [754.831358ms] Mar 23 23:57:17.008: INFO: Created: latency-svc-v449n Mar 23 23:57:17.014: INFO: Got endpoints: latency-svc-v449n [729.452092ms] Mar 23 23:57:17.034: INFO: Created: latency-svc-zbgzb Mar 23 23:57:17.050: INFO: Got endpoints: latency-svc-zbgzb [682.701439ms] Mar 23 23:57:17.071: INFO: Created: latency-svc-x8zp8 Mar 23 23:57:17.084: INFO: Got endpoints: latency-svc-x8zp8 [637.813261ms] Mar 23 23:57:17.101: INFO: Created: latency-svc-rnvlj Mar 23 23:57:17.146: INFO: Got endpoints: latency-svc-rnvlj [642.068306ms] Mar 23 23:57:17.148: INFO: Created: latency-svc-svkpm Mar 23 23:57:17.155: INFO: Got endpoints: latency-svc-svkpm [634.848007ms] Mar 23 23:57:17.176: INFO: Created: latency-svc-tpvlv Mar 23 23:57:17.192: INFO: Got endpoints: latency-svc-tpvlv [635.474415ms] Mar 23 23:57:17.215: INFO: Created: latency-svc-fvr9h Mar 23 23:57:17.239: INFO: Got endpoints: latency-svc-fvr9h [631.860335ms] Mar 23 23:57:17.287: INFO: Created: latency-svc-4cfl2 Mar 23 23:57:17.300: INFO: Got endpoints: latency-svc-4cfl2 [653.266247ms] Mar 23 23:57:17.325: INFO: Created: latency-svc-qvfjb Mar 23 23:57:17.342: INFO: Got endpoints: latency-svc-qvfjb [658.547147ms] Mar 23 23:57:17.361: INFO: Created: latency-svc-d57tn Mar 23 23:57:17.421: INFO: Got endpoints: 
latency-svc-d57tn [682.320989ms] Mar 23 23:57:17.422: INFO: Created: latency-svc-dt66g Mar 23 23:57:17.427: INFO: Got endpoints: latency-svc-dt66g [640.441114ms] Mar 23 23:57:17.450: INFO: Created: latency-svc-25k6z Mar 23 23:57:17.464: INFO: Got endpoints: latency-svc-25k6z [634.182431ms] Mar 23 23:57:17.485: INFO: Created: latency-svc-dtfxv Mar 23 23:57:17.500: INFO: Got endpoints: latency-svc-dtfxv [611.704528ms] Mar 23 23:57:17.518: INFO: Created: latency-svc-v47ks Mar 23 23:57:17.559: INFO: Got endpoints: latency-svc-v47ks [621.545015ms] Mar 23 23:57:17.560: INFO: Created: latency-svc-qtk2v Mar 23 23:57:17.584: INFO: Got endpoints: latency-svc-qtk2v [617.567967ms] Mar 23 23:57:17.607: INFO: Created: latency-svc-v8td5 Mar 23 23:57:17.623: INFO: Got endpoints: latency-svc-v8td5 [608.311667ms] Mar 23 23:57:17.640: INFO: Created: latency-svc-g44s8 Mar 23 23:57:17.653: INFO: Got endpoints: latency-svc-g44s8 [602.115043ms] Mar 23 23:57:17.691: INFO: Created: latency-svc-q4jb9 Mar 23 23:57:17.695: INFO: Got endpoints: latency-svc-q4jb9 [611.028789ms] Mar 23 23:57:17.715: INFO: Created: latency-svc-m7994 Mar 23 23:57:17.731: INFO: Got endpoints: latency-svc-m7994 [584.880131ms] Mar 23 23:57:17.751: INFO: Created: latency-svc-f4ztn Mar 23 23:57:17.767: INFO: Got endpoints: latency-svc-f4ztn [611.30237ms] Mar 23 23:57:17.822: INFO: Created: latency-svc-znlv6 Mar 23 23:57:17.839: INFO: Created: latency-svc-q7b6j Mar 23 23:57:17.839: INFO: Got endpoints: latency-svc-znlv6 [647.165819ms] Mar 23 23:57:17.863: INFO: Got endpoints: latency-svc-q7b6j [623.867169ms] Mar 23 23:57:17.893: INFO: Created: latency-svc-2q2wk Mar 23 23:57:17.907: INFO: Got endpoints: latency-svc-2q2wk [607.735002ms] Mar 23 23:57:17.955: INFO: Created: latency-svc-qhvqf Mar 23 23:57:17.973: INFO: Got endpoints: latency-svc-qhvqf [631.872033ms] Mar 23 23:57:17.975: INFO: Created: latency-svc-4ghk2 Mar 23 23:57:17.985: INFO: Got endpoints: latency-svc-4ghk2 [564.106302ms] Mar 23 23:57:18.009: INFO: Created: latency-svc-j7tv5 Mar 23 23:57:18.021: INFO: Got endpoints: latency-svc-j7tv5 [593.852059ms] Mar 23 23:57:18.042: INFO: Created: latency-svc-rbc9w Mar 23 23:57:18.086: INFO: Got endpoints: latency-svc-rbc9w [622.213134ms] Mar 23 23:57:18.108: INFO: Created: latency-svc-lb9bk Mar 23 23:57:18.123: INFO: Got endpoints: latency-svc-lb9bk [622.609574ms] Mar 23 23:57:18.153: INFO: Created: latency-svc-vk5nj Mar 23 23:57:18.178: INFO: Got endpoints: latency-svc-vk5nj [618.949724ms] Mar 23 23:57:18.224: INFO: Created: latency-svc-v5m7v Mar 23 23:57:18.247: INFO: Got endpoints: latency-svc-v5m7v [662.486586ms] Mar 23 23:57:18.283: INFO: Created: latency-svc-bq9w6 Mar 23 23:57:18.294: INFO: Got endpoints: latency-svc-bq9w6 [670.955244ms] Mar 23 23:57:18.312: INFO: Created: latency-svc-sk6hd Mar 23 23:57:18.338: INFO: Got endpoints: latency-svc-sk6hd [685.58227ms] Mar 23 23:57:18.351: INFO: Created: latency-svc-snw2j Mar 23 23:57:18.382: INFO: Got endpoints: latency-svc-snw2j [687.07258ms] Mar 23 23:57:18.411: INFO: Created: latency-svc-94hzh Mar 23 23:57:18.426: INFO: Got endpoints: latency-svc-94hzh [695.420482ms] Mar 23 23:57:18.469: INFO: Created: latency-svc-5n7h4 Mar 23 23:57:18.474: INFO: Got endpoints: latency-svc-5n7h4 [707.097489ms] Mar 23 23:57:18.498: INFO: Created: latency-svc-8zgkx Mar 23 23:57:18.517: INFO: Got endpoints: latency-svc-8zgkx [677.403087ms] Mar 23 23:57:18.546: INFO: Created: latency-svc-npg66 Mar 23 23:57:18.560: INFO: Got endpoints: latency-svc-npg66 [697.374885ms] Mar 23 23:57:18.615: INFO: Created: 
latency-svc-phjdt Mar 23 23:57:18.627: INFO: Got endpoints: latency-svc-phjdt [719.293152ms] Mar 23 23:57:18.640: INFO: Created: latency-svc-csbk4 Mar 23 23:57:18.651: INFO: Got endpoints: latency-svc-csbk4 [677.411632ms] Mar 23 23:57:18.672: INFO: Created: latency-svc-ksbf4 Mar 23 23:57:18.686: INFO: Got endpoints: latency-svc-ksbf4 [701.269392ms] Mar 23 23:57:18.732: INFO: Created: latency-svc-xjvzx Mar 23 23:57:18.750: INFO: Created: latency-svc-vlsw6 Mar 23 23:57:18.751: INFO: Got endpoints: latency-svc-xjvzx [729.123506ms] Mar 23 23:57:18.777: INFO: Got endpoints: latency-svc-vlsw6 [690.819913ms] Mar 23 23:57:18.813: INFO: Created: latency-svc-wcl6x Mar 23 23:57:18.824: INFO: Got endpoints: latency-svc-wcl6x [701.375649ms] Mar 23 23:57:18.870: INFO: Created: latency-svc-86blk Mar 23 23:57:18.888: INFO: Created: latency-svc-2mkg7 Mar 23 23:57:18.888: INFO: Got endpoints: latency-svc-86blk [710.340424ms] Mar 23 23:57:18.912: INFO: Got endpoints: latency-svc-2mkg7 [665.688023ms] Mar 23 23:57:18.950: INFO: Created: latency-svc-qchcj Mar 23 23:57:18.959: INFO: Got endpoints: latency-svc-qchcj [665.168957ms] Mar 23 23:57:18.996: INFO: Created: latency-svc-fkvv6 Mar 23 23:57:19.011: INFO: Got endpoints: latency-svc-fkvv6 [672.861459ms] Mar 23 23:57:19.011: INFO: Created: latency-svc-qhvbq Mar 23 23:57:19.025: INFO: Got endpoints: latency-svc-qhvbq [643.2119ms] Mar 23 23:57:19.041: INFO: Created: latency-svc-6whcd Mar 23 23:57:19.055: INFO: Got endpoints: latency-svc-6whcd [628.826466ms] Mar 23 23:57:19.074: INFO: Created: latency-svc-sprtj Mar 23 23:57:19.091: INFO: Got endpoints: latency-svc-sprtj [616.697792ms] Mar 23 23:57:19.134: INFO: Created: latency-svc-mml7j Mar 23 23:57:19.141: INFO: Got endpoints: latency-svc-mml7j [624.755174ms] Mar 23 23:57:19.159: INFO: Created: latency-svc-pzb6w Mar 23 23:57:19.172: INFO: Got endpoints: latency-svc-pzb6w [611.680012ms] Mar 23 23:57:19.195: INFO: Created: latency-svc-tqmvk Mar 23 23:57:19.208: INFO: Got endpoints: latency-svc-tqmvk [581.217667ms] Mar 23 23:57:19.221: INFO: Created: latency-svc-sj7jg Mar 23 23:57:19.232: INFO: Got endpoints: latency-svc-sj7jg [580.625735ms] Mar 23 23:57:19.277: INFO: Created: latency-svc-8jdf4 Mar 23 23:57:19.286: INFO: Got endpoints: latency-svc-8jdf4 [599.219284ms] Mar 23 23:57:19.321: INFO: Created: latency-svc-tjwgv Mar 23 23:57:19.346: INFO: Got endpoints: latency-svc-tjwgv [595.250714ms] Mar 23 23:57:19.362: INFO: Created: latency-svc-b82lr Mar 23 23:57:19.416: INFO: Got endpoints: latency-svc-b82lr [639.215476ms] Mar 23 23:57:19.419: INFO: Created: latency-svc-b5nhg Mar 23 23:57:19.432: INFO: Got endpoints: latency-svc-b5nhg [607.765638ms] Mar 23 23:57:19.449: INFO: Created: latency-svc-d5rsb Mar 23 23:57:19.476: INFO: Got endpoints: latency-svc-d5rsb [588.271954ms] Mar 23 23:57:19.500: INFO: Created: latency-svc-pfrb5 Mar 23 23:57:19.510: INFO: Got endpoints: latency-svc-pfrb5 [598.087786ms] Mar 23 23:57:19.542: INFO: Created: latency-svc-qtqg7 Mar 23 23:57:19.552: INFO: Got endpoints: latency-svc-qtqg7 [593.527916ms] Mar 23 23:57:19.567: INFO: Created: latency-svc-rcjkh Mar 23 23:57:19.593: INFO: Got endpoints: latency-svc-rcjkh [582.31849ms] Mar 23 23:57:19.623: INFO: Created: latency-svc-czbgb Mar 23 23:57:19.672: INFO: Got endpoints: latency-svc-czbgb [647.35809ms] Mar 23 23:57:19.674: INFO: Created: latency-svc-cp6dm Mar 23 23:57:19.681: INFO: Got endpoints: latency-svc-cp6dm [625.82194ms] Mar 23 23:57:19.705: INFO: Created: latency-svc-fkmzc Mar 23 23:57:19.734: INFO: Got endpoints: 
latency-svc-fkmzc [643.725248ms] Mar 23 23:57:19.767: INFO: Created: latency-svc-6w8rn Mar 23 23:57:19.816: INFO: Got endpoints: latency-svc-6w8rn [674.784205ms] Mar 23 23:57:19.818: INFO: Created: latency-svc-m6526 Mar 23 23:57:19.824: INFO: Got endpoints: latency-svc-m6526 [652.685868ms] Mar 23 23:57:19.839: INFO: Created: latency-svc-rqpj5 Mar 23 23:57:19.861: INFO: Got endpoints: latency-svc-rqpj5 [652.561121ms] Mar 23 23:57:19.879: INFO: Created: latency-svc-f2k69 Mar 23 23:57:19.891: INFO: Got endpoints: latency-svc-f2k69 [659.049113ms] Mar 23 23:57:19.915: INFO: Created: latency-svc-jpfdm Mar 23 23:57:19.972: INFO: Got endpoints: latency-svc-jpfdm [686.090792ms] Mar 23 23:57:19.973: INFO: Created: latency-svc-fbldx Mar 23 23:57:19.977: INFO: Got endpoints: latency-svc-fbldx [631.373593ms] Mar 23 23:57:19.995: INFO: Created: latency-svc-jf959 Mar 23 23:57:20.008: INFO: Got endpoints: latency-svc-jf959 [591.728597ms] Mar 23 23:57:20.025: INFO: Created: latency-svc-znlrb Mar 23 23:57:20.038: INFO: Got endpoints: latency-svc-znlrb [605.612464ms] Mar 23 23:57:20.065: INFO: Created: latency-svc-hz8g4 Mar 23 23:57:20.116: INFO: Got endpoints: latency-svc-hz8g4 [639.294979ms] Mar 23 23:57:20.131: INFO: Created: latency-svc-k6zr2 Mar 23 23:57:20.139: INFO: Got endpoints: latency-svc-k6zr2 [629.05059ms] Mar 23 23:57:20.157: INFO: Created: latency-svc-zfqtr Mar 23 23:57:20.169: INFO: Got endpoints: latency-svc-zfqtr [616.933515ms] Mar 23 23:57:20.193: INFO: Created: latency-svc-8gdh9 Mar 23 23:57:20.205: INFO: Got endpoints: latency-svc-8gdh9 [611.932009ms] Mar 23 23:57:20.247: INFO: Created: latency-svc-zdvht Mar 23 23:57:20.275: INFO: Got endpoints: latency-svc-zdvht [602.327502ms] Mar 23 23:57:20.275: INFO: Created: latency-svc-skmx6 Mar 23 23:57:20.292: INFO: Got endpoints: latency-svc-skmx6 [611.542833ms] Mar 23 23:57:20.310: INFO: Created: latency-svc-wlzhg Mar 23 23:57:20.328: INFO: Got endpoints: latency-svc-wlzhg [593.503255ms] Mar 23 23:57:20.379: INFO: Created: latency-svc-fpfxf Mar 23 23:57:20.403: INFO: Got endpoints: latency-svc-fpfxf [586.982963ms] Mar 23 23:57:20.404: INFO: Created: latency-svc-hbd2x Mar 23 23:57:20.418: INFO: Got endpoints: latency-svc-hbd2x [593.70964ms] Mar 23 23:57:20.433: INFO: Created: latency-svc-s8cmp Mar 23 23:57:20.442: INFO: Got endpoints: latency-svc-s8cmp [581.30825ms] Mar 23 23:57:20.523: INFO: Created: latency-svc-xcwzc Mar 23 23:57:20.544: INFO: Got endpoints: latency-svc-xcwzc [653.67813ms] Mar 23 23:57:20.545: INFO: Created: latency-svc-v78wx Mar 23 23:57:20.553: INFO: Got endpoints: latency-svc-v78wx [580.98716ms] Mar 23 23:57:20.569: INFO: Created: latency-svc-p4fgd Mar 23 23:57:20.585: INFO: Got endpoints: latency-svc-p4fgd [607.834152ms] Mar 23 23:57:20.601: INFO: Created: latency-svc-hjqdz Mar 23 23:57:20.619: INFO: Got endpoints: latency-svc-hjqdz [611.300134ms] Mar 23 23:57:20.654: INFO: Created: latency-svc-qbqvx Mar 23 23:57:20.661: INFO: Got endpoints: latency-svc-qbqvx [623.550884ms] Mar 23 23:57:20.695: INFO: Created: latency-svc-r66gh Mar 23 23:57:20.709: INFO: Got endpoints: latency-svc-r66gh [592.855693ms] Mar 23 23:57:20.725: INFO: Created: latency-svc-kcq9h Mar 23 23:57:20.733: INFO: Got endpoints: latency-svc-kcq9h [593.111774ms] Mar 23 23:57:20.733: INFO: Latencies: [50.491757ms 80.76091ms 127.91213ms 163.773247ms 199.989155ms 266.191343ms 277.668631ms 307.714349ms 396.376166ms 424.009784ms 522.129312ms 556.51135ms 564.106302ms 580.625735ms 580.659471ms 580.98716ms 581.217667ms 581.30825ms 582.31849ms 584.880131ms 
585.695017ms 586.982963ms 587.019343ms 588.271954ms 591.728597ms 592.855693ms 593.111774ms 593.503255ms 593.527916ms 593.70964ms 593.852059ms 595.250714ms 595.792423ms 596.02063ms 596.047608ms 598.087786ms 599.219284ms 599.608381ms 602.115043ms 602.327502ms 605.612464ms 607.735002ms 607.765638ms 607.834152ms 608.311667ms 611.028789ms 611.300134ms 611.30237ms 611.493296ms 611.542833ms 611.680012ms 611.704528ms 611.932009ms 614.97538ms 616.678499ms 616.697792ms 616.933515ms 617.567967ms 618.499526ms 618.949724ms 621.545015ms 622.213134ms 622.609574ms 623.550884ms 623.867169ms 624.755174ms 625.82194ms 628.826466ms 629.05059ms 631.373593ms 631.860335ms 631.872033ms 632.793084ms 634.182431ms 634.848007ms 635.474415ms 637.813261ms 638.709655ms 639.215476ms 639.294979ms 640.441114ms 641.449086ms 642.068306ms 643.2119ms 643.725248ms 647.165819ms 647.35809ms 652.561121ms 652.685868ms 653.266247ms 653.67813ms 653.768418ms 653.935083ms 658.547147ms 659.049113ms 662.486586ms 663.585994ms 664.863261ms 665.168957ms 665.530668ms 665.688023ms 666.9643ms 670.955244ms 672.861459ms 674.784205ms 677.403087ms 677.411632ms 682.320989ms 682.701439ms 685.58227ms 686.090792ms 687.07258ms 690.819913ms 694.916672ms 695.420482ms 697.374885ms 701.269392ms 701.375649ms 705.905005ms 707.097489ms 708.254611ms 709.179786ms 710.340424ms 718.166914ms 719.293152ms 720.786414ms 721.915801ms 724.548119ms 729.123506ms 729.452092ms 736.27401ms 737.068288ms 742.821617ms 745.673732ms 746.04202ms 748.027564ms 749.055972ms 749.583545ms 751.115601ms 752.973469ms 754.831358ms 754.892472ms 754.931415ms 755.375987ms 765.645058ms 770.287806ms 775.604068ms 777.39458ms 777.757593ms 778.711816ms 778.993253ms 783.170342ms 783.964313ms 785.026617ms 785.325939ms 786.431869ms 787.637066ms 791.200789ms 791.581236ms 792.24342ms 795.756864ms 796.461495ms 803.553375ms 812.568868ms 816.255573ms 820.629893ms 820.91819ms 827.890234ms 831.273124ms 832.934885ms 838.475082ms 838.717339ms 846.698993ms 851.989162ms 859.993107ms 863.877209ms 876.225967ms 878.940001ms 884.374216ms 884.588781ms 892.358889ms 895.113047ms 925.63953ms 934.385472ms 999.417925ms 1.344370533s 1.619803666s 1.625643656s 1.646567974s 1.676577883s 1.685201553s 1.690185861s 1.700857968s 1.701175216s 1.714839049s 1.722246723s 1.724464154s 1.724681975s 1.727919521s 1.745797069s] Mar 23 23:57:20.733: INFO: 50 %ile: 665.688023ms Mar 23 23:57:20.733: INFO: 90 %ile: 892.358889ms Mar 23 23:57:20.733: INFO: 99 %ile: 1.727919521s Mar 23 23:57:20.733: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:20.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1146" for this suite. 
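The 50/90/99 %ile lines above come from sorting the 200 samples in the Latencies array and indexing into the sorted slice. A minimal sketch of that computation, using a simple "index = p% of N" rule rather than the e2e framework's exact helper (values are nanoseconds, taken from the log's own samples):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of an already-sorted slice of
// durations, in the spirit of the 50/90/99 %ile lines above; the e2e
// framework's exact rounding may differ.
func percentile(sorted []time.Duration, p int) time.Duration {
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A handful of the 200 samples from the Latencies array above,
	// expressed in nanoseconds (884374216ns == 884.374216ms).
	samples := []time.Duration{
		884374216, 878940001, 816255573, 820918190, 1344370533,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	fmt.Println("50 %ile:", percentile(samples, 50))
	fmt.Println("99 %ile:", percentile(samples, 99))
}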
• [SLOW TEST:14.445 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":96,"skipped":1448,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:20.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 23 23:57:21.673: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 23 23:57:23.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604641, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604641, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604641, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604641, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 23 23:57:26.718: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:26.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2413" for this suite. STEP: Destroying namespace "webhook-2413-markers" for this suite. 
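The mutating webhook registered above works by returning a JSONPatch in its AdmissionResponse, so the pod created in the next step comes back already modified. A self-contained sketch of such a handler, with illustrative choices: the patch adds a label for brevity (the e2e webhook mutates the pod spec itself), and plain HTTP stands in for the TLS serving set up in "Setting up server cert":

package main

import (
	"encoding/json"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
)

// serveMutate answers an AdmissionReview with a JSONPatch, the mechanism
// behind "create a pod that should be updated by the webhook".
func serveMutate(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}
	patch := []byte(`[{"op":"add","path":"/metadata/labels/added-by-webhook","value":"true"}]`)
	pt := admissionv1.PatchTypeJSONPatch
	// Re-encoding the decoded object preserves apiVersion/kind, which the
	// v1 admission API requires in the response.
	review.Response = &admissionv1.AdmissionResponse{
		UID:       review.Request.UID,
		Allowed:   true,
		Patch:     patch,
		PatchType: &pt,
	}
	json.NewEncoder(w).Encode(review)
}

func main() {
	http.HandleFunc("/mutate", serveMutate)
	// Plain HTTP keeps the sketch short; the real webhook pod serves TLS.
	http.ListenAndServe(":8443", nil)
}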
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.433 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":97,"skipped":1457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:27.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Mar 23 23:57:27.327: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3329" to be "Succeeded or Failed" Mar 23 23:57:27.333: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.777083ms Mar 23 23:57:29.374: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047359802s Mar 23 23:57:31.442: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115177499s Mar 23 23:57:33.445: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117999087s STEP: Saw pod success Mar 23 23:57:33.445: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 23 23:57:33.456: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 23 23:57:33.781: INFO: Waiting for pod pod-host-path-test to disappear Mar 23 23:57:33.785: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:33.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3329" for this suite. 
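pod-host-path-test mounts a hostPath volume and lets its test containers report the volume's mode, which is what "should give a volume the correct mode" asserts. A sketch of the pod shape under assumed choices (busybox, the stat command, and /tmp are stand-ins, not the e2e test's exact values):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathPod sketches the shape of "pod-host-path-test": a hostPath
// volume mounted into a container that inspects the mount's mode.
func hostPathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-test"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
}

func main() { _ = hostPathPod() }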
• [SLOW TEST:6.669 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1484,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:33.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:39.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2385" for this suite. • [SLOW TEST:5.282 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":99,"skipped":1495,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:39.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 23 23:57:39.935: INFO: Waiting up to 5m0s for pod "pod-c1b26bf8-f94f-4bb6-a87c-38061b231067" in namespace "emptydir-7050" to be "Succeeded or Failed" Mar 23 23:57:39.964: INFO: Pod "pod-c1b26bf8-f94f-4bb6-a87c-38061b231067": Phase="Pending", Reason="", readiness=false. Elapsed: 28.931315ms Mar 23 23:57:42.032: INFO: Pod "pod-c1b26bf8-f94f-4bb6-a87c-38061b231067": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.097466845s Mar 23 23:57:44.053: INFO: Pod "pod-c1b26bf8-f94f-4bb6-a87c-38061b231067": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.118414024s STEP: Saw pod success Mar 23 23:57:44.053: INFO: Pod "pod-c1b26bf8-f94f-4bb6-a87c-38061b231067" satisfied condition "Succeeded or Failed" Mar 23 23:57:44.084: INFO: Trying to get logs from node latest-worker2 pod pod-c1b26bf8-f94f-4bb6-a87c-38061b231067 container test-container: STEP: delete the pod Mar 23 23:57:44.170: INFO: Waiting for pod pod-c1b26bf8-f94f-4bb6-a87c-38061b231067 to disappear Mar 23 23:57:44.186: INFO: Pod pod-c1b26bf8-f94f-4bb6-a87c-38061b231067 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:44.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7050" for this suite. • [SLOW TEST:5.071 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1501,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:44.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-bc871221-fa24-4dd7-aaf8-45bdacefb484 STEP: Creating a pod to test consume secrets Mar 23 23:57:44.374: INFO: Waiting up to 5m0s for pod "pod-secrets-2fddf348-52ea-4a31-a745-d9d813f490c2" in namespace "secrets-6331" to be "Succeeded or Failed" Mar 23 23:57:44.396: INFO: Pod "pod-secrets-2fddf348-52ea-4a31-a745-d9d813f490c2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.393971ms Mar 23 23:57:46.446: INFO: Pod "pod-secrets-2fddf348-52ea-4a31-a745-d9d813f490c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071871682s Mar 23 23:57:48.451: INFO: Pod "pod-secrets-2fddf348-52ea-4a31-a745-d9d813f490c2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.077413091s STEP: Saw pod success Mar 23 23:57:48.451: INFO: Pod "pod-secrets-2fddf348-52ea-4a31-a745-d9d813f490c2" satisfied condition "Succeeded or Failed" Mar 23 23:57:48.454: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2fddf348-52ea-4a31-a745-d9d813f490c2 container secret-volume-test: STEP: delete the pod Mar 23 23:57:48.721: INFO: Waiting for pod pod-secrets-2fddf348-52ea-4a31-a745-d9d813f490c2 to disappear Mar 23 23:57:48.745: INFO: Pod pod-secrets-2fddf348-52ea-4a31-a745-d9d813f490c2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:48.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6331" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1505,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:48.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:48.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7743" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":102,"skipped":1513,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:49.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 23:57:49.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-add822f6-5e37-4261-a02c-b0b1a6ea63a7" in namespace "projected-4561" to be "Succeeded or Failed" Mar 23 23:57:49.184: INFO: Pod "downwardapi-volume-add822f6-5e37-4261-a02c-b0b1a6ea63a7": Phase="Pending", Reason="", readiness=false. Elapsed: 53.985047ms Mar 23 23:57:51.224: INFO: Pod "downwardapi-volume-add822f6-5e37-4261-a02c-b0b1a6ea63a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09390305s Mar 23 23:57:53.228: INFO: Pod "downwardapi-volume-add822f6-5e37-4261-a02c-b0b1a6ea63a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097847611s STEP: Saw pod success Mar 23 23:57:53.228: INFO: Pod "downwardapi-volume-add822f6-5e37-4261-a02c-b0b1a6ea63a7" satisfied condition "Succeeded or Failed" Mar 23 23:57:53.231: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-add822f6-5e37-4261-a02c-b0b1a6ea63a7 container client-container: STEP: delete the pod Mar 23 23:57:53.264: INFO: Waiting for pod downwardapi-volume-add822f6-5e37-4261-a02c-b0b1a6ea63a7 to disappear Mar 23 23:57:53.268: INFO: Pod downwardapi-volume-add822f6-5e37-4261-a02c-b0b1a6ea63a7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:53.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4561" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1541,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:53.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 23 23:57:53.313: INFO: Waiting up to 5m0s for pod "pod-ad0d1d4a-26b5-4ab4-89fb-5b7f7152d2c2" in namespace "emptydir-9954" to be "Succeeded or Failed" Mar 23 23:57:53.350: INFO: Pod "pod-ad0d1d4a-26b5-4ab4-89fb-5b7f7152d2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 37.193703ms Mar 23 23:57:55.367: INFO: Pod "pod-ad0d1d4a-26b5-4ab4-89fb-5b7f7152d2c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054443533s Mar 23 23:57:57.371: INFO: Pod "pod-ad0d1d4a-26b5-4ab4-89fb-5b7f7152d2c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058298339s STEP: Saw pod success Mar 23 23:57:57.371: INFO: Pod "pod-ad0d1d4a-26b5-4ab4-89fb-5b7f7152d2c2" satisfied condition "Succeeded or Failed" Mar 23 23:57:57.374: INFO: Trying to get logs from node latest-worker2 pod pod-ad0d1d4a-26b5-4ab4-89fb-5b7f7152d2c2 container test-container: STEP: delete the pod Mar 23 23:57:57.402: INFO: Waiting for pod pod-ad0d1d4a-26b5-4ab4-89fb-5b7f7152d2c2 to disappear Mar 23 23:57:57.406: INFO: Pod pod-ad0d1d4a-26b5-4ab4-89fb-5b7f7152d2c2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:57:57.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9954" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1552,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:57:57.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 23 23:57:57.498: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:58:04.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1548" for this suite. • [SLOW TEST:7.384 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":105,"skipped":1590,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:58:04.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 23 23:58:09.426: INFO: Successfully updated pod "labelsupdatebd7afa9a-cd89-424b-b064-c714773ad6c5" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:58:11.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2258" for this suite. 
• [SLOW TEST:6.681 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1616,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:58:11.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 23:58:11.559: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf2d8fb2-86d7-4e32-bf39-bac00a7bc1c8" in namespace "projected-1288" to be "Succeeded or Failed" Mar 23 23:58:11.587: INFO: Pod "downwardapi-volume-bf2d8fb2-86d7-4e32-bf39-bac00a7bc1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 28.183883ms Mar 23 23:58:13.591: INFO: Pod "downwardapi-volume-bf2d8fb2-86d7-4e32-bf39-bac00a7bc1c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032488198s Mar 23 23:58:15.596: INFO: Pod "downwardapi-volume-bf2d8fb2-86d7-4e32-bf39-bac00a7bc1c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036895705s STEP: Saw pod success Mar 23 23:58:15.596: INFO: Pod "downwardapi-volume-bf2d8fb2-86d7-4e32-bf39-bac00a7bc1c8" satisfied condition "Succeeded or Failed" Mar 23 23:58:15.599: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-bf2d8fb2-86d7-4e32-bf39-bac00a7bc1c8 container client-container: STEP: delete the pod Mar 23 23:58:15.618: INFO: Waiting for pod downwardapi-volume-bf2d8fb2-86d7-4e32-bf39-bac00a7bc1c8 to disappear Mar 23 23:58:15.635: INFO: Pod downwardapi-volume-bf2d8fb2-86d7-4e32-bf39-bac00a7bc1c8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:58:15.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1288" for this suite. 
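The pattern `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` followed by Phase="Pending"/Elapsed lines recurs throughout these tests: it is a poll on the pod's phase until a terminal state. A sketch of that loop with recent client-go signatures (the helper name and 2s interval are illustrative, not the framework's own):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitSucceededOrFailed polls a pod's phase until it reaches Succeeded
// (done) or Failed (error), mirroring the "Succeeded or Failed" waits
// printed throughout the log.
func waitSucceededOrFailed(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q\n", name, pod.Status.Phase)
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil // still Pending/Running; keep polling
	})
}

func main() { _ = waitSucceededOrFailed }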
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1628,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:58:15.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 23 23:58:15.691: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b354233a-5c77-45e3-aa07-a90809e0af7e" in namespace "downward-api-5907" to be "Succeeded or Failed" Mar 23 23:58:15.715: INFO: Pod "downwardapi-volume-b354233a-5c77-45e3-aa07-a90809e0af7e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.35043ms Mar 23 23:58:17.719: INFO: Pod "downwardapi-volume-b354233a-5c77-45e3-aa07-a90809e0af7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028034019s Mar 23 23:58:19.723: INFO: Pod "downwardapi-volume-b354233a-5c77-45e3-aa07-a90809e0af7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032329925s STEP: Saw pod success Mar 23 23:58:19.723: INFO: Pod "downwardapi-volume-b354233a-5c77-45e3-aa07-a90809e0af7e" satisfied condition "Succeeded or Failed" Mar 23 23:58:19.726: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b354233a-5c77-45e3-aa07-a90809e0af7e container client-container: STEP: delete the pod Mar 23 23:58:19.740: INFO: Waiting for pod downwardapi-volume-b354233a-5c77-45e3-aa07-a90809e0af7e to disappear Mar 23 23:58:19.745: INFO: Pod downwardapi-volume-b354233a-5c77-45e3-aa07-a90809e0af7e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:58:19.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5907" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1632,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:58:19.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:58:19.796: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:58:20.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3477" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":109,"skipped":1642,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:58:20.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 23 23:58:20.873: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:58:27.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5341" for this suite. 
• [SLOW TEST:6.283 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":110,"skipped":1669,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:58:27.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Mar 23 23:58:27.182: INFO: Waiting up to 5m0s for pod "var-expansion-e26dae0e-e7f6-4f21-88f4-4ba56760a36e" in namespace "var-expansion-1350" to be "Succeeded or Failed" Mar 23 23:58:27.186: INFO: Pod "var-expansion-e26dae0e-e7f6-4f21-88f4-4ba56760a36e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042939ms Mar 23 23:58:29.190: INFO: Pod "var-expansion-e26dae0e-e7f6-4f21-88f4-4ba56760a36e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008091223s Mar 23 23:58:31.194: INFO: Pod "var-expansion-e26dae0e-e7f6-4f21-88f4-4ba56760a36e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01193668s STEP: Saw pod success Mar 23 23:58:31.194: INFO: Pod "var-expansion-e26dae0e-e7f6-4f21-88f4-4ba56760a36e" satisfied condition "Succeeded or Failed" Mar 23 23:58:31.197: INFO: Trying to get logs from node latest-worker2 pod var-expansion-e26dae0e-e7f6-4f21-88f4-4ba56760a36e container dapi-container: STEP: delete the pod Mar 23 23:58:31.219: INFO: Waiting for pod var-expansion-e26dae0e-e7f6-4f21-88f4-4ba56760a36e to disappear Mar 23 23:58:31.239: INFO: Pod var-expansion-e26dae0e-e7f6-4f21-88f4-4ba56760a36e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:58:31.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1350" for this suite. 
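The Variable Expansion test above exercises $(VAR) substitution: the kubelet expands $(MESSAGE) in the container's command/args from the container's own env before the process starts, so the shell receives the already-substituted string. A sketch with illustrative names and values:

package main

import corev1 "k8s.io/api/core/v1"

// argsExpansionContainer shows args substitution: $(MESSAGE) below is
// expanded by the kubelet, not by the shell, from the container's env.
func argsExpansionContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox",
		Command: []string{"sh", "-c"},
		Args:    []string{"echo $(MESSAGE)"},
		Env: []corev1.EnvVar{{
			Name:  "MESSAGE",
			Value: "test-value",
		}},
	}
}

func main() { _ = argsExpansionContainer() }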
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1670,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:58:31.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:58:37.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1300" for this suite. • [SLOW TEST:6.079 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1676,"failed":0} SSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:58:37.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 23 23:58:51.462: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3237 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:58:51.462: INFO: >>> kubeConfig: /root/.kube/config I0323 23:58:51.496494 7 log.go:172] (0xc001fcfc30) (0xc000ca5400) Create stream I0323 23:58:51.496543 7 log.go:172] (0xc001fcfc30) (0xc000ca5400) Stream added, broadcasting: 1 I0323 23:58:51.499676 7 log.go:172] (0xc001fcfc30) Reply frame received for 1 I0323 23:58:51.499725 7 log.go:172] (0xc001fcfc30) (0xc000ca54a0) Create stream I0323 23:58:51.499736 7 log.go:172] 
(0xc001fcfc30) (0xc000ca54a0) Stream added, broadcasting: 3 I0323 23:58:51.500858 7 log.go:172] (0xc001fcfc30) Reply frame received for 3 I0323 23:58:51.500901 7 log.go:172] (0xc001fcfc30) (0xc000b9ba40) Create stream I0323 23:58:51.500915 7 log.go:172] (0xc001fcfc30) (0xc000b9ba40) Stream added, broadcasting: 5 I0323 23:58:51.502271 7 log.go:172] (0xc001fcfc30) Reply frame received for 5 I0323 23:58:51.572201 7 log.go:172] (0xc001fcfc30) Data frame received for 3 I0323 23:58:51.572248 7 log.go:172] (0xc000ca54a0) (3) Data frame handling I0323 23:58:51.572277 7 log.go:172] (0xc000ca54a0) (3) Data frame sent I0323 23:58:51.572370 7 log.go:172] (0xc001fcfc30) Data frame received for 5 I0323 23:58:51.572470 7 log.go:172] (0xc000b9ba40) (5) Data frame handling I0323 23:58:51.572513 7 log.go:172] (0xc001fcfc30) Data frame received for 3 I0323 23:58:51.572530 7 log.go:172] (0xc000ca54a0) (3) Data frame handling I0323 23:58:51.574138 7 log.go:172] (0xc001fcfc30) Data frame received for 1 I0323 23:58:51.574165 7 log.go:172] (0xc000ca5400) (1) Data frame handling I0323 23:58:51.574179 7 log.go:172] (0xc000ca5400) (1) Data frame sent I0323 23:58:51.574198 7 log.go:172] (0xc001fcfc30) (0xc000ca5400) Stream removed, broadcasting: 1 I0323 23:58:51.574225 7 log.go:172] (0xc001fcfc30) Go away received I0323 23:58:51.574379 7 log.go:172] (0xc001fcfc30) (0xc000ca5400) Stream removed, broadcasting: 1 I0323 23:58:51.574415 7 log.go:172] (0xc001fcfc30) (0xc000ca54a0) Stream removed, broadcasting: 3 I0323 23:58:51.574430 7 log.go:172] (0xc001fcfc30) (0xc000b9ba40) Stream removed, broadcasting: 5 Mar 23 23:58:51.574: INFO: Exec stderr: "" Mar 23 23:58:51.574: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3237 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:58:51.574: INFO: >>> kubeConfig: /root/.kube/config I0323 23:58:51.606675 7 log.go:172] (0xc0023db970) (0xc0009aa320) Create stream I0323 23:58:51.606705 7 log.go:172] (0xc0023db970) (0xc0009aa320) Stream added, broadcasting: 1 I0323 23:58:51.609652 7 log.go:172] (0xc0023db970) Reply frame received for 1 I0323 23:58:51.609679 7 log.go:172] (0xc0023db970) (0xc002460000) Create stream I0323 23:58:51.609687 7 log.go:172] (0xc0023db970) (0xc002460000) Stream added, broadcasting: 3 I0323 23:58:51.610758 7 log.go:172] (0xc0023db970) Reply frame received for 3 I0323 23:58:51.610781 7 log.go:172] (0xc0023db970) (0xc0024600a0) Create stream I0323 23:58:51.610789 7 log.go:172] (0xc0023db970) (0xc0024600a0) Stream added, broadcasting: 5 I0323 23:58:51.611614 7 log.go:172] (0xc0023db970) Reply frame received for 5 I0323 23:58:51.665569 7 log.go:172] (0xc0023db970) Data frame received for 3 I0323 23:58:51.665602 7 log.go:172] (0xc002460000) (3) Data frame handling I0323 23:58:51.665613 7 log.go:172] (0xc002460000) (3) Data frame sent I0323 23:58:51.665620 7 log.go:172] (0xc0023db970) Data frame received for 3 I0323 23:58:51.665625 7 log.go:172] (0xc002460000) (3) Data frame handling I0323 23:58:51.665652 7 log.go:172] (0xc0023db970) Data frame received for 5 I0323 23:58:51.665666 7 log.go:172] (0xc0024600a0) (5) Data frame handling I0323 23:58:51.667002 7 log.go:172] (0xc0023db970) Data frame received for 1 I0323 23:58:51.667027 7 log.go:172] (0xc0009aa320) (1) Data frame handling I0323 23:58:51.667048 7 log.go:172] (0xc0009aa320) (1) Data frame sent I0323 23:58:51.667069 7 log.go:172] (0xc0023db970) (0xc0009aa320) Stream removed, broadcasting: 1 I0323 
23:58:51.667087 7 log.go:172] (0xc0023db970) Go away received I0323 23:58:51.667179 7 log.go:172] (0xc0023db970) (0xc0009aa320) Stream removed, broadcasting: 1 I0323 23:58:51.667206 7 log.go:172] (0xc0023db970) (0xc002460000) Stream removed, broadcasting: 3 I0323 23:58:51.667220 7 log.go:172] (0xc0023db970) (0xc0024600a0) Stream removed, broadcasting: 5 Mar 23 23:58:51.667: INFO: Exec stderr: "" Mar 23 23:58:51.667: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3237 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:58:51.667: INFO: >>> kubeConfig: /root/.kube/config I0323 23:58:51.699754 7 log.go:172] (0xc002478000) (0xc0009aaa00) Create stream I0323 23:58:51.699788 7 log.go:172] (0xc002478000) (0xc0009aaa00) Stream added, broadcasting: 1 I0323 23:58:51.702572 7 log.go:172] (0xc002478000) Reply frame received for 1 I0323 23:58:51.702620 7 log.go:172] (0xc002478000) (0xc002460280) Create stream I0323 23:58:51.702637 7 log.go:172] (0xc002478000) (0xc002460280) Stream added, broadcasting: 3 I0323 23:58:51.704135 7 log.go:172] (0xc002478000) Reply frame received for 3 I0323 23:58:51.704202 7 log.go:172] (0xc002478000) (0xc002460320) Create stream I0323 23:58:51.704231 7 log.go:172] (0xc002478000) (0xc002460320) Stream added, broadcasting: 5 I0323 23:58:51.705589 7 log.go:172] (0xc002478000) Reply frame received for 5 I0323 23:58:51.760733 7 log.go:172] (0xc002478000) Data frame received for 5 I0323 23:58:51.760826 7 log.go:172] (0xc002460320) (5) Data frame handling I0323 23:58:51.760879 7 log.go:172] (0xc002478000) Data frame received for 3 I0323 23:58:51.760906 7 log.go:172] (0xc002460280) (3) Data frame handling I0323 23:58:51.760948 7 log.go:172] (0xc002460280) (3) Data frame sent I0323 23:58:51.760974 7 log.go:172] (0xc002478000) Data frame received for 3 I0323 23:58:51.760987 7 log.go:172] (0xc002460280) (3) Data frame handling I0323 23:58:51.762725 7 log.go:172] (0xc002478000) Data frame received for 1 I0323 23:58:51.762758 7 log.go:172] (0xc0009aaa00) (1) Data frame handling I0323 23:58:51.762777 7 log.go:172] (0xc0009aaa00) (1) Data frame sent I0323 23:58:51.762793 7 log.go:172] (0xc002478000) (0xc0009aaa00) Stream removed, broadcasting: 1 I0323 23:58:51.762814 7 log.go:172] (0xc002478000) Go away received I0323 23:58:51.763010 7 log.go:172] (0xc002478000) (0xc0009aaa00) Stream removed, broadcasting: 1 I0323 23:58:51.763052 7 log.go:172] (0xc002478000) (0xc002460280) Stream removed, broadcasting: 3 I0323 23:58:51.763092 7 log.go:172] (0xc002478000) (0xc002460320) Stream removed, broadcasting: 5 Mar 23 23:58:51.763: INFO: Exec stderr: "" Mar 23 23:58:51.763: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3237 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:58:51.763: INFO: >>> kubeConfig: /root/.kube/config I0323 23:58:51.797245 7 log.go:172] (0xc0024ec2c0) (0xc000ca5860) Create stream I0323 23:58:51.797273 7 log.go:172] (0xc0024ec2c0) (0xc000ca5860) Stream added, broadcasting: 1 I0323 23:58:51.799588 7 log.go:172] (0xc0024ec2c0) Reply frame received for 1 I0323 23:58:51.799615 7 log.go:172] (0xc0024ec2c0) (0xc0009aac80) Create stream I0323 23:58:51.799625 7 log.go:172] (0xc0024ec2c0) (0xc0009aac80) Stream added, broadcasting: 3 I0323 23:58:51.800540 7 log.go:172] (0xc0024ec2c0) Reply frame received for 3 I0323 23:58:51.800587 7 log.go:172] (0xc0024ec2c0) 
(0xc002946000) Create stream I0323 23:58:51.800605 7 log.go:172] (0xc0024ec2c0) (0xc002946000) Stream added, broadcasting: 5 I0323 23:58:51.801817 7 log.go:172] (0xc0024ec2c0) Reply frame received for 5 I0323 23:58:51.859895 7 log.go:172] (0xc0024ec2c0) Data frame received for 5 I0323 23:58:51.859918 7 log.go:172] (0xc002946000) (5) Data frame handling I0323 23:58:51.859941 7 log.go:172] (0xc0024ec2c0) Data frame received for 3 I0323 23:58:51.859949 7 log.go:172] (0xc0009aac80) (3) Data frame handling I0323 23:58:51.859956 7 log.go:172] (0xc0009aac80) (3) Data frame sent I0323 23:58:51.859970 7 log.go:172] (0xc0024ec2c0) Data frame received for 3 I0323 23:58:51.859978 7 log.go:172] (0xc0009aac80) (3) Data frame handling I0323 23:58:51.861847 7 log.go:172] (0xc0024ec2c0) Data frame received for 1 I0323 23:58:51.861891 7 log.go:172] (0xc000ca5860) (1) Data frame handling I0323 23:58:51.861922 7 log.go:172] (0xc000ca5860) (1) Data frame sent I0323 23:58:51.861949 7 log.go:172] (0xc0024ec2c0) (0xc000ca5860) Stream removed, broadcasting: 1 I0323 23:58:51.861995 7 log.go:172] (0xc0024ec2c0) Go away received I0323 23:58:51.862042 7 log.go:172] (0xc0024ec2c0) (0xc000ca5860) Stream removed, broadcasting: 1 I0323 23:58:51.862069 7 log.go:172] (0xc0024ec2c0) (0xc0009aac80) Stream removed, broadcasting: 3 I0323 23:58:51.862091 7 log.go:172] (0xc0024ec2c0) (0xc002946000) Stream removed, broadcasting: 5 Mar 23 23:58:51.862: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 23 23:58:51.862: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3237 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:58:51.862: INFO: >>> kubeConfig: /root/.kube/config I0323 23:58:51.892884 7 log.go:172] (0xc002620370) (0xc0024605a0) Create stream I0323 23:58:51.892915 7 log.go:172] (0xc002620370) (0xc0024605a0) Stream added, broadcasting: 1 I0323 23:58:51.895592 7 log.go:172] (0xc002620370) Reply frame received for 1 I0323 23:58:51.895631 7 log.go:172] (0xc002620370) (0xc0009aad20) Create stream I0323 23:58:51.895646 7 log.go:172] (0xc002620370) (0xc0009aad20) Stream added, broadcasting: 3 I0323 23:58:51.896857 7 log.go:172] (0xc002620370) Reply frame received for 3 I0323 23:58:51.896957 7 log.go:172] (0xc002620370) (0xc0009aaf00) Create stream I0323 23:58:51.896979 7 log.go:172] (0xc002620370) (0xc0009aaf00) Stream added, broadcasting: 5 I0323 23:58:51.898348 7 log.go:172] (0xc002620370) Reply frame received for 5 I0323 23:58:51.969396 7 log.go:172] (0xc002620370) Data frame received for 5 I0323 23:58:51.969435 7 log.go:172] (0xc0009aaf00) (5) Data frame handling I0323 23:58:51.969462 7 log.go:172] (0xc002620370) Data frame received for 3 I0323 23:58:51.969486 7 log.go:172] (0xc0009aad20) (3) Data frame handling I0323 23:58:51.969517 7 log.go:172] (0xc0009aad20) (3) Data frame sent I0323 23:58:51.969541 7 log.go:172] (0xc002620370) Data frame received for 3 I0323 23:58:51.969558 7 log.go:172] (0xc0009aad20) (3) Data frame handling I0323 23:58:51.970975 7 log.go:172] (0xc002620370) Data frame received for 1 I0323 23:58:51.971023 7 log.go:172] (0xc0024605a0) (1) Data frame handling I0323 23:58:51.971055 7 log.go:172] (0xc0024605a0) (1) Data frame sent I0323 23:58:51.971079 7 log.go:172] (0xc002620370) (0xc0024605a0) Stream removed, broadcasting: 1 I0323 23:58:51.971131 7 log.go:172] (0xc002620370) Go away received I0323 23:58:51.971219 7 
log.go:172] (0xc002620370) (0xc0024605a0) Stream removed, broadcasting: 1 I0323 23:58:51.971237 7 log.go:172] (0xc002620370) (0xc0009aad20) Stream removed, broadcasting: 3 I0323 23:58:51.971255 7 log.go:172] (0xc002620370) (0xc0009aaf00) Stream removed, broadcasting: 5 Mar 23 23:58:51.971: INFO: Exec stderr: "" Mar 23 23:58:51.971: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3237 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:58:51.971: INFO: >>> kubeConfig: /root/.kube/config I0323 23:58:52.006800 7 log.go:172] (0xc0026209a0) (0xc002460780) Create stream I0323 23:58:52.006831 7 log.go:172] (0xc0026209a0) (0xc002460780) Stream added, broadcasting: 1 I0323 23:58:52.009443 7 log.go:172] (0xc0026209a0) Reply frame received for 1 I0323 23:58:52.009482 7 log.go:172] (0xc0026209a0) (0xc000b9bae0) Create stream I0323 23:58:52.009497 7 log.go:172] (0xc0026209a0) (0xc000b9bae0) Stream added, broadcasting: 3 I0323 23:58:52.010493 7 log.go:172] (0xc0026209a0) Reply frame received for 3 I0323 23:58:52.010542 7 log.go:172] (0xc0026209a0) (0xc000b9bcc0) Create stream I0323 23:58:52.010558 7 log.go:172] (0xc0026209a0) (0xc000b9bcc0) Stream added, broadcasting: 5 I0323 23:58:52.011567 7 log.go:172] (0xc0026209a0) Reply frame received for 5 I0323 23:58:52.082529 7 log.go:172] (0xc0026209a0) Data frame received for 3 I0323 23:58:52.082567 7 log.go:172] (0xc000b9bae0) (3) Data frame handling I0323 23:58:52.082582 7 log.go:172] (0xc000b9bae0) (3) Data frame sent I0323 23:58:52.082600 7 log.go:172] (0xc0026209a0) Data frame received for 3 I0323 23:58:52.082617 7 log.go:172] (0xc000b9bae0) (3) Data frame handling I0323 23:58:52.082654 7 log.go:172] (0xc0026209a0) Data frame received for 5 I0323 23:58:52.082705 7 log.go:172] (0xc000b9bcc0) (5) Data frame handling I0323 23:58:52.084210 7 log.go:172] (0xc0026209a0) Data frame received for 1 I0323 23:58:52.084255 7 log.go:172] (0xc002460780) (1) Data frame handling I0323 23:58:52.084289 7 log.go:172] (0xc002460780) (1) Data frame sent I0323 23:58:52.084332 7 log.go:172] (0xc0026209a0) (0xc002460780) Stream removed, broadcasting: 1 I0323 23:58:52.084363 7 log.go:172] (0xc0026209a0) Go away received I0323 23:58:52.084485 7 log.go:172] (0xc0026209a0) (0xc002460780) Stream removed, broadcasting: 1 I0323 23:58:52.084509 7 log.go:172] (0xc0026209a0) (0xc000b9bae0) Stream removed, broadcasting: 3 I0323 23:58:52.084520 7 log.go:172] (0xc0026209a0) (0xc000b9bcc0) Stream removed, broadcasting: 5 Mar 23 23:58:52.084: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 23 23:58:52.084: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3237 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:58:52.084: INFO: >>> kubeConfig: /root/.kube/config I0323 23:58:52.115054 7 log.go:172] (0xc002952370) (0xc002946280) Create stream I0323 23:58:52.115084 7 log.go:172] (0xc002952370) (0xc002946280) Stream added, broadcasting: 1 I0323 23:58:52.122501 7 log.go:172] (0xc002952370) Reply frame received for 1 I0323 23:58:52.122538 7 log.go:172] (0xc002952370) (0xc000b9bd60) Create stream I0323 23:58:52.122548 7 log.go:172] (0xc002952370) (0xc000b9bd60) Stream added, broadcasting: 3 I0323 23:58:52.123323 7 log.go:172] (0xc002952370) Reply frame received for 3 I0323 23:58:52.123344 7 log.go:172] 
(0xc002952370) (0xc000b9bea0) Create stream I0323 23:58:52.123352 7 log.go:172] (0xc002952370) (0xc000b9bea0) Stream added, broadcasting: 5 I0323 23:58:52.124033 7 log.go:172] (0xc002952370) Reply frame received for 5 I0323 23:58:52.184837 7 log.go:172] (0xc002952370) Data frame received for 5 I0323 23:58:52.184888 7 log.go:172] (0xc000b9bea0) (5) Data frame handling I0323 23:58:52.184931 7 log.go:172] (0xc002952370) Data frame received for 3 I0323 23:58:52.184949 7 log.go:172] (0xc000b9bd60) (3) Data frame handling I0323 23:58:52.184964 7 log.go:172] (0xc000b9bd60) (3) Data frame sent I0323 23:58:52.184993 7 log.go:172] (0xc002952370) Data frame received for 3 I0323 23:58:52.185007 7 log.go:172] (0xc000b9bd60) (3) Data frame handling I0323 23:58:52.186786 7 log.go:172] (0xc002952370) Data frame received for 1 I0323 23:58:52.186826 7 log.go:172] (0xc002946280) (1) Data frame handling I0323 23:58:52.186859 7 log.go:172] (0xc002946280) (1) Data frame sent I0323 23:58:52.186909 7 log.go:172] (0xc002952370) (0xc002946280) Stream removed, broadcasting: 1 I0323 23:58:52.186943 7 log.go:172] (0xc002952370) Go away received I0323 23:58:52.187093 7 log.go:172] (0xc002952370) (0xc002946280) Stream removed, broadcasting: 1 I0323 23:58:52.187138 7 log.go:172] (0xc002952370) (0xc000b9bd60) Stream removed, broadcasting: 3 I0323 23:58:52.187158 7 log.go:172] (0xc002952370) (0xc000b9bea0) Stream removed, broadcasting: 5 Mar 23 23:58:52.187: INFO: Exec stderr: "" Mar 23 23:58:52.187: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3237 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:58:52.187: INFO: >>> kubeConfig: /root/.kube/config I0323 23:58:52.219097 7 log.go:172] (0xc002478630) (0xc0009ab360) Create stream I0323 23:58:52.219120 7 log.go:172] (0xc002478630) (0xc0009ab360) Stream added, broadcasting: 1 I0323 23:58:52.221547 7 log.go:172] (0xc002478630) Reply frame received for 1 I0323 23:58:52.221592 7 log.go:172] (0xc002478630) (0xc000ca59a0) Create stream I0323 23:58:52.221608 7 log.go:172] (0xc002478630) (0xc000ca59a0) Stream added, broadcasting: 3 I0323 23:58:52.222670 7 log.go:172] (0xc002478630) Reply frame received for 3 I0323 23:58:52.222719 7 log.go:172] (0xc002478630) (0xc000ca5a40) Create stream I0323 23:58:52.222736 7 log.go:172] (0xc002478630) (0xc000ca5a40) Stream added, broadcasting: 5 I0323 23:58:52.223669 7 log.go:172] (0xc002478630) Reply frame received for 5 I0323 23:58:52.286325 7 log.go:172] (0xc002478630) Data frame received for 5 I0323 23:58:52.286482 7 log.go:172] (0xc000ca5a40) (5) Data frame handling I0323 23:58:52.286527 7 log.go:172] (0xc002478630) Data frame received for 3 I0323 23:58:52.286550 7 log.go:172] (0xc000ca59a0) (3) Data frame handling I0323 23:58:52.286585 7 log.go:172] (0xc000ca59a0) (3) Data frame sent I0323 23:58:52.286627 7 log.go:172] (0xc002478630) Data frame received for 3 I0323 23:58:52.286649 7 log.go:172] (0xc000ca59a0) (3) Data frame handling I0323 23:58:52.288022 7 log.go:172] (0xc002478630) Data frame received for 1 I0323 23:58:52.288052 7 log.go:172] (0xc0009ab360) (1) Data frame handling I0323 23:58:52.288065 7 log.go:172] (0xc0009ab360) (1) Data frame sent I0323 23:58:52.288082 7 log.go:172] (0xc002478630) (0xc0009ab360) Stream removed, broadcasting: 1 I0323 23:58:52.288123 7 log.go:172] (0xc002478630) Go away received I0323 23:58:52.288207 7 log.go:172] (0xc002478630) (0xc0009ab360) Stream removed, broadcasting: 1 I0323 
23:58:52.288244 7 log.go:172] (0xc002478630) (0xc000ca59a0) Stream removed, broadcasting: 3 I0323 23:58:52.288265 7 log.go:172] (0xc002478630) (0xc000ca5a40) Stream removed, broadcasting: 5 Mar 23 23:58:52.288: INFO: Exec stderr: "" Mar 23 23:58:52.288: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3237 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:58:52.288: INFO: >>> kubeConfig: /root/.kube/config I0323 23:58:52.321424 7 log.go:172] (0xc001fc6790) (0xc002d501e0) Create stream I0323 23:58:52.321454 7 log.go:172] (0xc001fc6790) (0xc002d501e0) Stream added, broadcasting: 1 I0323 23:58:52.323826 7 log.go:172] (0xc001fc6790) Reply frame received for 1 I0323 23:58:52.323861 7 log.go:172] (0xc001fc6790) (0xc000ca5d60) Create stream I0323 23:58:52.323875 7 log.go:172] (0xc001fc6790) (0xc000ca5d60) Stream added, broadcasting: 3 I0323 23:58:52.324833 7 log.go:172] (0xc001fc6790) Reply frame received for 3 I0323 23:58:52.324877 7 log.go:172] (0xc001fc6790) (0xc0022be000) Create stream I0323 23:58:52.324892 7 log.go:172] (0xc001fc6790) (0xc0022be000) Stream added, broadcasting: 5 I0323 23:58:52.326127 7 log.go:172] (0xc001fc6790) Reply frame received for 5 I0323 23:58:52.384164 7 log.go:172] (0xc001fc6790) Data frame received for 3 I0323 23:58:52.384213 7 log.go:172] (0xc000ca5d60) (3) Data frame handling I0323 23:58:52.384258 7 log.go:172] (0xc000ca5d60) (3) Data frame sent I0323 23:58:52.384295 7 log.go:172] (0xc001fc6790) Data frame received for 3 I0323 23:58:52.384315 7 log.go:172] (0xc000ca5d60) (3) Data frame handling I0323 23:58:52.384338 7 log.go:172] (0xc001fc6790) Data frame received for 5 I0323 23:58:52.384355 7 log.go:172] (0xc0022be000) (5) Data frame handling I0323 23:58:52.385768 7 log.go:172] (0xc001fc6790) Data frame received for 1 I0323 23:58:52.385780 7 log.go:172] (0xc002d501e0) (1) Data frame handling I0323 23:58:52.385787 7 log.go:172] (0xc002d501e0) (1) Data frame sent I0323 23:58:52.385945 7 log.go:172] (0xc001fc6790) (0xc002d501e0) Stream removed, broadcasting: 1 I0323 23:58:52.385993 7 log.go:172] (0xc001fc6790) (0xc002d501e0) Stream removed, broadcasting: 1 I0323 23:58:52.386008 7 log.go:172] (0xc001fc6790) (0xc000ca5d60) Stream removed, broadcasting: 3 I0323 23:58:52.386111 7 log.go:172] (0xc001fc6790) (0xc0022be000) Stream removed, broadcasting: 5 I0323 23:58:52.386143 7 log.go:172] (0xc001fc6790) Go away received Mar 23 23:58:52.386: INFO: Exec stderr: "" Mar 23 23:58:52.386: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3237 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 23 23:58:52.386: INFO: >>> kubeConfig: /root/.kube/config I0323 23:58:52.417351 7 log.go:172] (0xc0024ec8f0) (0xc0022be1e0) Create stream I0323 23:58:52.417376 7 log.go:172] (0xc0024ec8f0) (0xc0022be1e0) Stream added, broadcasting: 1 I0323 23:58:52.419815 7 log.go:172] (0xc0024ec8f0) Reply frame received for 1 I0323 23:58:52.419851 7 log.go:172] (0xc0024ec8f0) (0xc002460820) Create stream I0323 23:58:52.419863 7 log.go:172] (0xc0024ec8f0) (0xc002460820) Stream added, broadcasting: 3 I0323 23:58:52.420807 7 log.go:172] (0xc0024ec8f0) Reply frame received for 3 I0323 23:58:52.420834 7 log.go:172] (0xc0024ec8f0) (0xc0024608c0) Create stream I0323 23:58:52.420844 7 log.go:172] (0xc0024ec8f0) (0xc0024608c0) Stream added, broadcasting: 5 I0323 23:58:52.421804 7 
log.go:172] (0xc0024ec8f0) Reply frame received for 5 I0323 23:58:52.501087 7 log.go:172] (0xc0024ec8f0) Data frame received for 5 I0323 23:58:52.501344 7 log.go:172] (0xc0024608c0) (5) Data frame handling I0323 23:58:52.501387 7 log.go:172] (0xc0024ec8f0) Data frame received for 3 I0323 23:58:52.501482 7 log.go:172] (0xc002460820) (3) Data frame handling I0323 23:58:52.501530 7 log.go:172] (0xc002460820) (3) Data frame sent I0323 23:58:52.501559 7 log.go:172] (0xc0024ec8f0) Data frame received for 3 I0323 23:58:52.501583 7 log.go:172] (0xc002460820) (3) Data frame handling I0323 23:58:52.502762 7 log.go:172] (0xc0024ec8f0) Data frame received for 1 I0323 23:58:52.502816 7 log.go:172] (0xc0022be1e0) (1) Data frame handling I0323 23:58:52.502852 7 log.go:172] (0xc0022be1e0) (1) Data frame sent I0323 23:58:52.502982 7 log.go:172] (0xc0024ec8f0) (0xc0022be1e0) Stream removed, broadcasting: 1 I0323 23:58:52.503029 7 log.go:172] (0xc0024ec8f0) Go away received I0323 23:58:52.503157 7 log.go:172] (0xc0024ec8f0) (0xc0022be1e0) Stream removed, broadcasting: 1 I0323 23:58:52.503201 7 log.go:172] (0xc0024ec8f0) (0xc002460820) Stream removed, broadcasting: 3 I0323 23:58:52.503222 7 log.go:172] (0xc0024ec8f0) (0xc0024608c0) Stream removed, broadcasting: 5 Mar 23 23:58:52.503: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:58:52.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-3237" for this suite. • [SLOW TEST:15.157 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1684,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:58:52.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-5ns2 STEP: Creating a pod to test atomic-volume-subpath Mar 23 23:58:52.596: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5ns2" in namespace "subpath-8013" to be "Succeeded or Failed" Mar 23 23:58:52.608: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.10311ms Mar 23 23:58:54.612: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016147497s Mar 23 23:58:56.616: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Running", Reason="", readiness=true. Elapsed: 4.020076823s Mar 23 23:58:58.620: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Running", Reason="", readiness=true. Elapsed: 6.024359198s Mar 23 23:59:00.625: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Running", Reason="", readiness=true. Elapsed: 8.028550151s Mar 23 23:59:02.629: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Running", Reason="", readiness=true. Elapsed: 10.032821348s Mar 23 23:59:04.633: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Running", Reason="", readiness=true. Elapsed: 12.036736644s Mar 23 23:59:06.637: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Running", Reason="", readiness=true. Elapsed: 14.040962111s Mar 23 23:59:08.641: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Running", Reason="", readiness=true. Elapsed: 16.045180241s Mar 23 23:59:10.646: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Running", Reason="", readiness=true. Elapsed: 18.050190359s Mar 23 23:59:12.651: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Running", Reason="", readiness=true. Elapsed: 20.054538423s Mar 23 23:59:14.655: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Running", Reason="", readiness=true. Elapsed: 22.058815613s Mar 23 23:59:16.662: INFO: Pod "pod-subpath-test-configmap-5ns2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.066202416s STEP: Saw pod success Mar 23 23:59:16.662: INFO: Pod "pod-subpath-test-configmap-5ns2" satisfied condition "Succeeded or Failed" Mar 23 23:59:16.665: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-5ns2 container test-container-subpath-configmap-5ns2: STEP: delete the pod Mar 23 23:59:16.698: INFO: Waiting for pod pod-subpath-test-configmap-5ns2 to disappear Mar 23 23:59:16.711: INFO: Pod pod-subpath-test-configmap-5ns2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-5ns2 Mar 23 23:59:16.711: INFO: Deleting pod "pod-subpath-test-configmap-5ns2" in namespace "subpath-8013" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:59:16.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8013" for this suite. 
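The subpath case above mounts a single ConfigMap key over an existing file inside the container. A minimal sketch of such a pod spec in Go, using the k8s.io/api types (this assumes k8s.io/api and k8s.io/apimachinery on the module path; the pod and ConfigMap names are hypothetical, not the generated ones from this run):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox",
				Command: []string{"cat", "/etc/resolv.conf"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "config",
					// subPath mounts a single key of the volume over an existing
					// file path, which is the case this conformance test covers.
					MountPath: "/etc/resolv.conf",
					SubPath:   "resolv.conf", // assumes the ConfigMap has this key
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}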
• [SLOW TEST:24.211 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":114,"skipped":1696,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:59:16.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:59:22.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9478" for this suite. STEP: Destroying namespace "nsdeletetest-744" for this suite. Mar 23 23:59:23.002: INFO: Namespace nsdeletetest-744 was already deleted STEP: Destroying namespace "nsdeletetest-7494" for this suite. 
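The namespace-deletion case above asserts that deleting a namespace cascades to the Services inside it. A rough client-go sketch of the same sequence, assuming a reachable cluster, a kubeconfig at the default path, and a recent client-go with context-taking call signatures (all names hypothetical):

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	ns := "nsdelete-demo"
	cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: ns}}, metav1.CreateOptions{})
	cs.CoreV1().Services(ns).Create(ctx, &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-svc"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}, metav1.CreateOptions{})

	// Deleting the namespace cascades to everything in it, including the Service.
	cs.CoreV1().Namespaces().Delete(ctx, ns, metav1.DeleteOptions{})
	// Once the namespace finalizer completes and the namespace is recreated,
	// listing services must come back empty — that is what the test verifies.
	svcs, _ := cs.CoreV1().Services(ns).List(ctx, metav1.ListOptions{})
	fmt.Println("services remaining:", len(svcs.Items))
}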
• [SLOW TEST:6.284 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":115,"skipped":1723,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:59:23.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 23 23:59:23.070: INFO: Waiting up to 5m0s for pod "pod-094bb24e-a2e4-4da5-9787-798565fb4165" in namespace "emptydir-6176" to be "Succeeded or Failed" Mar 23 23:59:23.105: INFO: Pod "pod-094bb24e-a2e4-4da5-9787-798565fb4165": Phase="Pending", Reason="", readiness=false. Elapsed: 35.38067ms Mar 23 23:59:25.109: INFO: Pod "pod-094bb24e-a2e4-4da5-9787-798565fb4165": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039385653s Mar 23 23:59:27.113: INFO: Pod "pod-094bb24e-a2e4-4da5-9787-798565fb4165": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043486363s STEP: Saw pod success Mar 23 23:59:27.113: INFO: Pod "pod-094bb24e-a2e4-4da5-9787-798565fb4165" satisfied condition "Succeeded or Failed" Mar 23 23:59:27.116: INFO: Trying to get logs from node latest-worker2 pod pod-094bb24e-a2e4-4da5-9787-798565fb4165 container test-container: STEP: delete the pod Mar 23 23:59:27.144: INFO: Waiting for pod pod-094bb24e-a2e4-4da5-9787-798565fb4165 to disappear Mar 23 23:59:27.157: INFO: Pod pod-094bb24e-a2e4-4da5-9787-798565fb4165 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 23 23:59:27.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6176" for this suite. 
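The emptyDir case above writes a 0777 file onto a tmpfs-backed volume. A minimal sketch of an equivalent pod, with busybox standing in for the test's mount-test container (names hypothetical); leaving Medium unset would give the node-default backing exercised by the (root,0666,default) variant later in this log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory backs the emptyDir with tmpfs, matching the
					// (root,0777,tmpfs) case in the log above.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "busybox",
				// Roughly what the test asserts: create a file with the requested
				// mode and report the resulting permissions.
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0777 /mnt/f && stat -c %a /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}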
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 23 23:59:27.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-c50a6ffd-67b0-4a0e-95ae-6f89d943960a in namespace container-probe-7855 Mar 23 23:59:31.263: INFO: Started pod busybox-c50a6ffd-67b0-4a0e-95ae-6f89d943960a in namespace container-probe-7855 STEP: checking the pod's current state and verifying that restartCount is present Mar 23 23:59:31.266: INFO: Initial restart count of pod busybox-c50a6ffd-67b0-4a0e-95ae-6f89d943960a is 0 Mar 24 00:00:17.367: INFO: Restart count of pod container-probe-7855/busybox-c50a6ffd-67b0-4a0e-95ae-6f89d943960a is now 1 (46.100819975s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:00:17.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7855" for this suite. 
• [SLOW TEST:50.262 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1776,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:00:17.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-1669 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 24 00:00:17.526: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 24 00:00:17.619: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 24 00:00:19.623: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 24 00:00:21.758: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:00:23.624: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:00:25.623: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:00:27.624: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:00:29.624: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:00:31.624: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 24 00:00:31.630: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 24 00:00:33.634: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 24 00:00:35.634: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 24 00:00:39.702: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.182 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1669 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 00:00:39.702: INFO: >>> kubeConfig: /root/.kube/config I0324 00:00:39.736795 7 log.go:172] (0xc0024d22c0) (0xc002461040) Create stream I0324 00:00:39.736830 7 log.go:172] (0xc0024d22c0) (0xc002461040) Stream added, broadcasting: 1 I0324 00:00:39.738904 7 log.go:172] (0xc0024d22c0) Reply frame received for 1 I0324 00:00:39.738961 7 log.go:172] (0xc0024d22c0) (0xc000322f00) Create stream I0324 00:00:39.738974 7 log.go:172] (0xc0024d22c0) (0xc000322f00) Stream added, broadcasting: 3 I0324 00:00:39.740087 7 log.go:172] (0xc0024d22c0) Reply frame received for 3 I0324 00:00:39.740128 7 
log.go:172] (0xc0024d22c0) (0xc0024610e0) Create stream I0324 00:00:39.740140 7 log.go:172] (0xc0024d22c0) (0xc0024610e0) Stream added, broadcasting: 5 I0324 00:00:39.741074 7 log.go:172] (0xc0024d22c0) Reply frame received for 5 I0324 00:00:40.818249 7 log.go:172] (0xc0024d22c0) Data frame received for 3 I0324 00:00:40.818286 7 log.go:172] (0xc000322f00) (3) Data frame handling I0324 00:00:40.818306 7 log.go:172] (0xc000322f00) (3) Data frame sent I0324 00:00:40.818440 7 log.go:172] (0xc0024d22c0) Data frame received for 5 I0324 00:00:40.818467 7 log.go:172] (0xc0024610e0) (5) Data frame handling I0324 00:00:40.818909 7 log.go:172] (0xc0024d22c0) Data frame received for 3 I0324 00:00:40.818938 7 log.go:172] (0xc000322f00) (3) Data frame handling I0324 00:00:40.820766 7 log.go:172] (0xc0024d22c0) Data frame received for 1 I0324 00:00:40.820795 7 log.go:172] (0xc002461040) (1) Data frame handling I0324 00:00:40.820815 7 log.go:172] (0xc002461040) (1) Data frame sent I0324 00:00:40.820848 7 log.go:172] (0xc0024d22c0) (0xc002461040) Stream removed, broadcasting: 1 I0324 00:00:40.820881 7 log.go:172] (0xc0024d22c0) Go away received I0324 00:00:40.820995 7 log.go:172] (0xc0024d22c0) (0xc002461040) Stream removed, broadcasting: 1 I0324 00:00:40.821027 7 log.go:172] (0xc0024d22c0) (0xc000322f00) Stream removed, broadcasting: 3 I0324 00:00:40.821049 7 log.go:172] (0xc0024d22c0) (0xc0024610e0) Stream removed, broadcasting: 5 Mar 24 00:00:40.821: INFO: Found all expected endpoints: [netserver-0] Mar 24 00:00:40.824: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.64 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1669 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 00:00:40.825: INFO: >>> kubeConfig: /root/.kube/config I0324 00:00:40.860292 7 log.go:172] (0xc0023da370) (0xc0016ba780) Create stream I0324 00:00:40.860318 7 log.go:172] (0xc0023da370) (0xc0016ba780) Stream added, broadcasting: 1 I0324 00:00:40.862393 7 log.go:172] (0xc0023da370) Reply frame received for 1 I0324 00:00:40.862462 7 log.go:172] (0xc0023da370) (0xc000b9a140) Create stream I0324 00:00:40.862478 7 log.go:172] (0xc0023da370) (0xc000b9a140) Stream added, broadcasting: 3 I0324 00:00:40.863516 7 log.go:172] (0xc0023da370) Reply frame received for 3 I0324 00:00:40.863561 7 log.go:172] (0xc0023da370) (0xc0016baaa0) Create stream I0324 00:00:40.863577 7 log.go:172] (0xc0023da370) (0xc0016baaa0) Stream added, broadcasting: 5 I0324 00:00:40.864698 7 log.go:172] (0xc0023da370) Reply frame received for 5 I0324 00:00:41.954732 7 log.go:172] (0xc0023da370) Data frame received for 3 I0324 00:00:41.954764 7 log.go:172] (0xc000b9a140) (3) Data frame handling I0324 00:00:41.954797 7 log.go:172] (0xc000b9a140) (3) Data frame sent I0324 00:00:41.954818 7 log.go:172] (0xc0023da370) Data frame received for 3 I0324 00:00:41.954828 7 log.go:172] (0xc000b9a140) (3) Data frame handling I0324 00:00:41.956161 7 log.go:172] (0xc0023da370) Data frame received for 1 I0324 00:00:41.956242 7 log.go:172] (0xc0016ba780) (1) Data frame handling I0324 00:00:41.956269 7 log.go:172] (0xc0016ba780) (1) Data frame sent I0324 00:00:41.956291 7 log.go:172] (0xc0023da370) Data frame received for 5 I0324 00:00:41.956310 7 log.go:172] (0xc0016baaa0) (5) Data frame handling I0324 00:00:41.956414 7 log.go:172] (0xc0023da370) (0xc0016ba780) Stream removed, broadcasting: 1 I0324 00:00:41.956522 7 log.go:172] (0xc0023da370) (0xc0016ba780) Stream removed, 
broadcasting: 1 I0324 00:00:41.956555 7 log.go:172] (0xc0023da370) (0xc000b9a140) Stream removed, broadcasting: 3 I0324 00:00:41.956575 7 log.go:172] (0xc0023da370) (0xc0016baaa0) Stream removed, broadcasting: 5 Mar 24 00:00:41.956: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 I0324 00:00:41.956690 7 log.go:172] (0xc0023da370) Go away received Mar 24 00:00:41.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1669" for this suite. • [SLOW TEST:24.523 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":1779,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:00:41.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-2778 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2778 STEP: Deleting pre-stop pod Mar 24 00:00:55.133: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:00:55.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2778" for this suite. 
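The PreStop case above shows the tester pod's hook firing exactly once ({"prestop": 1}) when the pod is deleted. A rough sketch of a pod carrying such a hook; the /write path, server IP, and port below are placeholders, not this run's actual values, and corev1.Handler is the field type of this client vintage (later renamed LifecycleHandler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	grace := int64(30)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// On delete, the kubelet runs PreStop before sending SIGTERM;
					// the e2e hook calls back to the server pod, which is why the
					// server log above records the prestop hit after the deletion.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/write",    // placeholder endpoint
							Host: "10.0.0.10", // placeholder server pod IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}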
• [SLOW TEST:13.233 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":119,"skipped":1818,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:00:55.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 24 00:00:55.291: INFO: Waiting up to 5m0s for pod "pod-1201f779-b34c-4eb4-b94b-d6730e7f67ac" in namespace "emptydir-5022" to be "Succeeded or Failed" Mar 24 00:00:55.412: INFO: Pod "pod-1201f779-b34c-4eb4-b94b-d6730e7f67ac": Phase="Pending", Reason="", readiness=false. Elapsed: 121.222566ms Mar 24 00:00:57.437: INFO: Pod "pod-1201f779-b34c-4eb4-b94b-d6730e7f67ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14553401s Mar 24 00:00:59.440: INFO: Pod "pod-1201f779-b34c-4eb4-b94b-d6730e7f67ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149441741s STEP: Saw pod success Mar 24 00:00:59.440: INFO: Pod "pod-1201f779-b34c-4eb4-b94b-d6730e7f67ac" satisfied condition "Succeeded or Failed" Mar 24 00:00:59.443: INFO: Trying to get logs from node latest-worker pod pod-1201f779-b34c-4eb4-b94b-d6730e7f67ac container test-container: STEP: delete the pod Mar 24 00:00:59.487: INFO: Waiting for pod pod-1201f779-b34c-4eb4-b94b-d6730e7f67ac to disappear Mar 24 00:00:59.495: INFO: Pod pod-1201f779-b34c-4eb4-b94b-d6730e7f67ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:00:59.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5022" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":120,"skipped":1832,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:00:59.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-e1b02e88-480d-42bb-bcc2-fd1f06add3cc STEP: Creating a pod to test consume configMaps Mar 24 00:00:59.571: INFO: Waiting up to 5m0s for pod "pod-configmaps-1eb1bbf5-9cc1-4696-9483-483b84442330" in namespace "configmap-5353" to be "Succeeded or Failed" Mar 24 00:00:59.574: INFO: Pod "pod-configmaps-1eb1bbf5-9cc1-4696-9483-483b84442330": Phase="Pending", Reason="", readiness=false. Elapsed: 3.249743ms Mar 24 00:01:01.579: INFO: Pod "pod-configmaps-1eb1bbf5-9cc1-4696-9483-483b84442330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007887387s Mar 24 00:01:03.583: INFO: Pod "pod-configmaps-1eb1bbf5-9cc1-4696-9483-483b84442330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012273706s STEP: Saw pod success Mar 24 00:01:03.583: INFO: Pod "pod-configmaps-1eb1bbf5-9cc1-4696-9483-483b84442330" satisfied condition "Succeeded or Failed" Mar 24 00:01:03.587: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1eb1bbf5-9cc1-4696-9483-483b84442330 container configmap-volume-test: STEP: delete the pod Mar 24 00:01:03.659: INFO: Waiting for pod pod-configmaps-1eb1bbf5-9cc1-4696-9483-483b84442330 to disappear Mar 24 00:01:03.664: INFO: Pod pod-configmaps-1eb1bbf5-9cc1-4696-9483-483b84442330 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:01:03.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5353" for this suite. 
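The ConfigMap case above projects a key to a remapped path ("mappings") and reads it back as a non-root user. A minimal sketch, with hypothetical names and UID:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-mapping-demo"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "cfg",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
						// The "mapping": project key data-1 to a different path
						// inside the volume.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cfg", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}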
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":1849,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:01:03.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:01:04.048: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:01:06.057: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604864, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604864, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604864, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604863, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:01:09.090: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:01:09.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5835" for this suite. STEP: Destroying namespace "webhook-5835-markers" for this suite. 
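The webhook case above registers a mutating admission webhook for ConfigMap creates and checks that new ConfigMaps come back mutated. A rough sketch of the registration object; the webhook name, path, and CA bundle are placeholders (the run's real backend is the e2e-test-webhook service deployed in the test namespace):

package main

import (
	"encoding/json"
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	path := "/mutating-configmaps" // placeholder path on the webhook server
	sideEffects := admissionv1.SideEffectClassNone
	failurePolicy := admissionv1.Fail
	timeout := int32(10)
	cfg := admissionv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-mutating-webhook"}, // hypothetical name
		Webhooks: []admissionv1.MutatingWebhook{{
			Name: "mutate-configmaps.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-demo", // hypothetical; the run used its generated namespace
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("<PEM bundle that signed the webhook's serving cert>"),
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			TimeoutSeconds:          &timeout,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}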
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.561 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":122,"skipped":1856,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:01:09.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:01:20.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3427" for this suite. • [SLOW TEST:11.162 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":123,"skipped":1864,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:01:20.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:01:37.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8483" for this suite. • [SLOW TEST:17.106 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":124,"skipped":1897,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:01:37.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Mar 24 00:01:37.576: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Mar 24 00:01:37.585: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 24 00:01:37.586: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Mar 24 00:01:37.607: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 24 00:01:37.607: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Mar 24 00:01:37.634: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Mar 24 00:01:37.634: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Mar 24 00:01:44.883: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:01:44.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3786" for this suite. 
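The LimitRange case above stamps defaults onto pods that omit resource requirements; the quantity maps verified in the log decode to requests of 100m CPU / 200Mi memory / 200Gi ephemeral-storage and limits of 500m / 500Mi / 500Gi. A minimal sketch of a LimitRange carrying those defaults (object name hypothetical):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	lr := corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "limitrange-demo"}, // hypothetical name
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				// DefaultRequest/Default are what the apiserver applies to a
				// container that omits requests/limits, which is what the test's
				// "Ensuring Pod has resource requirements applied" step checks.
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("100m"),
					corev1.ResourceMemory:           resource.MustParse("200Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"),
				},
				Default: corev1.ResourceList{
					corev1.ResourceCPU:              resource.MustParse("500m"),
					corev1.ResourceMemory:           resource.MustParse("500Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(lr, "", "  ")
	fmt.Println(string(b))
}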
• [SLOW TEST:7.467 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":125,"skipped":1914,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:01:44.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-92802c84-3ef5-45a7-bda5-f3d417bc4f25 STEP: Creating a pod to test consume configMaps Mar 24 00:01:45.050: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c0e4a9a5-f094-40c8-88cf-fff14bc05555" in namespace "projected-774" to be "Succeeded or Failed" Mar 24 00:01:45.063: INFO: Pod "pod-projected-configmaps-c0e4a9a5-f094-40c8-88cf-fff14bc05555": Phase="Pending", Reason="", readiness=false. Elapsed: 12.258007ms Mar 24 00:01:47.066: INFO: Pod "pod-projected-configmaps-c0e4a9a5-f094-40c8-88cf-fff14bc05555": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015922021s Mar 24 00:01:49.150: INFO: Pod "pod-projected-configmaps-c0e4a9a5-f094-40c8-88cf-fff14bc05555": Phase="Running", Reason="", readiness=true. Elapsed: 4.09911526s Mar 24 00:01:51.155: INFO: Pod "pod-projected-configmaps-c0e4a9a5-f094-40c8-88cf-fff14bc05555": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104601313s STEP: Saw pod success Mar 24 00:01:51.155: INFO: Pod "pod-projected-configmaps-c0e4a9a5-f094-40c8-88cf-fff14bc05555" satisfied condition "Succeeded or Failed" Mar 24 00:01:51.158: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-c0e4a9a5-f094-40c8-88cf-fff14bc05555 container projected-configmap-volume-test: STEP: delete the pod Mar 24 00:01:51.202: INFO: Waiting for pod pod-projected-configmaps-c0e4a9a5-f094-40c8-88cf-fff14bc05555 to disappear Mar 24 00:01:51.253: INFO: Pod pod-projected-configmaps-c0e4a9a5-f094-40c8-88cf-fff14bc05555 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:01:51.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-774" for this suite. 
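
For reference, the pod that test builds can be sketched as follows (a minimal sketch; pod name, image, command, and the 0400 mode are illustrative): a projected volume sourcing a ConfigMap, with DefaultMode setting the permission bits on the projected files.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod mounts ConfigMap "my-configmap" through a projected
// volume and lists the resulting file modes.
func projectedConfigMapPod() *corev1.Pod {
	defaultMode := int32(0400) // octal 0400 shows up as decimal 256 in API dumps
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}

The test then waits for the pod to reach Succeeded and reads the container log, which is the "Succeeded or Failed" polling visible above.
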
• [SLOW TEST:6.322 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":1914,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:01:51.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:01:53.303: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:01:55.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604913, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604913, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604913, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604913, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:01:58.347: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is 
empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:02:10.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6044" for this suite. STEP: Destroying namespace "webhook-6044-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.340 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":127,"skipped":1928,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:02:10.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 24 00:02:10.703: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Mar 24 00:02:11.415: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 24 00:02:13.629: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604931, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604931, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604931, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720604931, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 00:02:16.304: INFO: Waited 625.536211ms for the sample-apiserver to be ready to handle requests. 
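
Registering the sample API server boils down to creating an APIService object that tells the aggregation layer to proxy a group/version to an in-cluster Service. A minimal sketch (group, version, and service names are illustrative, not necessarily what this run used):

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

// sampleAPIService routes /apis/wardle.example.com/v1alpha1 through the
// aggregator to the sample apiserver's Service; the CA bundle lets the
// aggregator verify the backend's serving certificate.
func sampleAPIService(caBundle []byte) *apiregistrationv1.APIService {
	port := int32(443)
	return &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-8285",
				Name:      "sample-api", // illustrative service name
				Port:      &port,
			},
			CABundle:             caBundle,
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
}

Once the APIService reports Available, requests such as the readiness probe logged above are answered by the aggregated server rather than by kube-apiserver itself.
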
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:02:17.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-8285" for this suite. • [SLOW TEST:6.549 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":128,"skipped":1952,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:02:17.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 24 00:02:17.399: INFO: Waiting up to 5m0s for pod "pod-1b02b68f-f4e7-4968-8f02-acd6d242f31c" in namespace "emptydir-5718" to be "Succeeded or Failed" Mar 24 00:02:17.426: INFO: Pod "pod-1b02b68f-f4e7-4968-8f02-acd6d242f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.875353ms Mar 24 00:02:19.437: INFO: Pod "pod-1b02b68f-f4e7-4968-8f02-acd6d242f31c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038290917s Mar 24 00:02:21.755: INFO: Pod "pod-1b02b68f-f4e7-4968-8f02-acd6d242f31c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.356021058s STEP: Saw pod success Mar 24 00:02:21.755: INFO: Pod "pod-1b02b68f-f4e7-4968-8f02-acd6d242f31c" satisfied condition "Succeeded or Failed" Mar 24 00:02:21.774: INFO: Trying to get logs from node latest-worker2 pod pod-1b02b68f-f4e7-4968-8f02-acd6d242f31c container test-container: STEP: delete the pod Mar 24 00:02:21.920: INFO: Waiting for pod pod-1b02b68f-f4e7-4968-8f02-acd6d242f31c to disappear Mar 24 00:02:21.923: INFO: Pod pod-1b02b68f-f4e7-4968-8f02-acd6d242f31c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:02:21.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5718" for this suite. 
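
The pod behind "Creating a pod to test emptydir 0666 on tmpfs" can be sketched like this (a minimal sketch; image, command, and the UID are illustrative): an emptyDir with Medium set to Memory is tmpfs, and the test writes a file as a non-root user and checks its 0666 mode.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirTmpfsPod writes and stats a file on a memory-backed emptyDir while
// running as a non-root UID.
func emptyDirTmpfsPod() *corev1.Pod {
	nonRoot := int64(1000)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
}
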
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":1959,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:02:21.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:02:22.068: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"89c54551-2217-4db9-97d1-bfcd06283de3", Controller:(*bool)(0xc001fcde12), BlockOwnerDeletion:(*bool)(0xc001fcde13)}} Mar 24 00:02:22.106: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7accf21c-3a57-4803-82dc-e887f3bc508e", Controller:(*bool)(0xc001fcdfba), BlockOwnerDeletion:(*bool)(0xc001fcdfbb)}} Mar 24 00:02:22.111: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"bac68003-7bf0-478a-8c4b-b96240c82bea", Controller:(*bool)(0xc0022c5a72), BlockOwnerDeletion:(*bool)(0xc0022c5a73)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:02:27.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9456" for this suite. 
• [SLOW TEST:5.224 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":130,"skipped":1978,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:02:27.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 24 00:02:27.271: INFO: Waiting up to 5m0s for pod "downward-api-038fec6b-4d6b-48e2-92ed-806038f68533" in namespace "downward-api-8203" to be "Succeeded or Failed" Mar 24 00:02:27.290: INFO: Pod "downward-api-038fec6b-4d6b-48e2-92ed-806038f68533": Phase="Pending", Reason="", readiness=false. Elapsed: 18.562844ms Mar 24 00:02:29.294: INFO: Pod "downward-api-038fec6b-4d6b-48e2-92ed-806038f68533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023071678s Mar 24 00:02:31.299: INFO: Pod "downward-api-038fec6b-4d6b-48e2-92ed-806038f68533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028023802s STEP: Saw pod success Mar 24 00:02:31.299: INFO: Pod "downward-api-038fec6b-4d6b-48e2-92ed-806038f68533" satisfied condition "Succeeded or Failed" Mar 24 00:02:31.303: INFO: Trying to get logs from node latest-worker2 pod downward-api-038fec6b-4d6b-48e2-92ed-806038f68533 container dapi-container: STEP: delete the pod Mar 24 00:02:31.367: INFO: Waiting for pod downward-api-038fec6b-4d6b-48e2-92ed-806038f68533 to disappear Mar 24 00:02:31.372: INFO: Pod downward-api-038fec6b-4d6b-48e2-92ed-806038f68533 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:02:31.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8203" for this suite. 
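
The env var under test is wired through the downward API; a minimal sketch (pod name, image, and command are illustrative):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIHostIPPod exposes the node's IP to the container as HOST_IP,
// resolved from the pod's status.hostIP field at container start.
func downwardAPIHostIPPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
}
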
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":1984,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:02:31.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 24 00:02:31.411: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 24 00:02:31.428: INFO: Waiting for terminating namespaces to be deleted... Mar 24 00:02:31.430: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 24 00:02:31.448: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 24 00:02:31.448: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 00:02:31.448: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 24 00:02:31.448: INFO: Container kube-proxy ready: true, restart count 0 Mar 24 00:02:31.448: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 24 00:02:31.452: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 24 00:02:31.452: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 00:02:31.452: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 24 00:02:31.452: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-78455332-b22c-4d8a-8d8f-0406d1e7883d 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-78455332-b22c-4d8a-8d8f-0406d1e7883d off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-78455332-b22c-4d8a-8d8f-0406d1e7883d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:07:39.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-11" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.237 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":132,"skipped":2027,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:07:39.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-9268d3a2-f494-403f-bf3f-c7c4dd78caa9 STEP: Creating a pod to test consume configMaps Mar 24 00:07:39.709: INFO: Waiting up to 5m0s for pod "pod-configmaps-29b6cebd-295a-46a7-bfb0-5fee07053440" in namespace "configmap-997" to be "Succeeded or Failed" Mar 24 00:07:39.731: INFO: Pod "pod-configmaps-29b6cebd-295a-46a7-bfb0-5fee07053440": Phase="Pending", Reason="", readiness=false. Elapsed: 21.326735ms Mar 24 00:07:41.747: INFO: Pod "pod-configmaps-29b6cebd-295a-46a7-bfb0-5fee07053440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037427302s Mar 24 00:07:43.751: INFO: Pod "pod-configmaps-29b6cebd-295a-46a7-bfb0-5fee07053440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041997403s STEP: Saw pod success Mar 24 00:07:43.751: INFO: Pod "pod-configmaps-29b6cebd-295a-46a7-bfb0-5fee07053440" satisfied condition "Succeeded or Failed" Mar 24 00:07:43.754: INFO: Trying to get logs from node latest-worker pod pod-configmaps-29b6cebd-295a-46a7-bfb0-5fee07053440 container configmap-volume-test: STEP: delete the pod Mar 24 00:07:43.800: INFO: Waiting for pod pod-configmaps-29b6cebd-295a-46a7-bfb0-5fee07053440 to disappear Mar 24 00:07:43.815: INFO: Pod pod-configmaps-29b6cebd-295a-46a7-bfb0-5fee07053440 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:07:43.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-997" for this suite. 
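
DefaultMode here is the same mechanism as in the projected-volume test earlier, just on a plain ConfigMap volume; note the API carries the mode as a decimal *int32, so octal 0644 appears as 420 in object dumps. A minimal sketch:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// configMapVolume returns a ConfigMap-backed volume whose files are created
// with mode 0644 (420 decimal, which is also what the API server applies when
// DefaultMode is nil).
func configMapVolume(cmName string) corev1.Volume {
	mode := int32(0644)
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				DefaultMode:          &mode,
			},
		},
	}
}
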
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2027,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:07:43.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e62d425d-04c7-4959-95fe-66fd10f93cbb STEP: Creating a pod to test consume secrets Mar 24 00:07:43.937: INFO: Waiting up to 5m0s for pod "pod-secrets-a58dc2b0-05ab-4afb-8614-03a40e54f3eb" in namespace "secrets-9368" to be "Succeeded or Failed" Mar 24 00:07:43.941: INFO: Pod "pod-secrets-a58dc2b0-05ab-4afb-8614-03a40e54f3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.995703ms Mar 24 00:07:45.946: INFO: Pod "pod-secrets-a58dc2b0-05ab-4afb-8614-03a40e54f3eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009335425s Mar 24 00:07:47.951: INFO: Pod "pod-secrets-a58dc2b0-05ab-4afb-8614-03a40e54f3eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013769434s STEP: Saw pod success Mar 24 00:07:47.951: INFO: Pod "pod-secrets-a58dc2b0-05ab-4afb-8614-03a40e54f3eb" satisfied condition "Succeeded or Failed" Mar 24 00:07:47.954: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a58dc2b0-05ab-4afb-8614-03a40e54f3eb container secret-volume-test: STEP: delete the pod Mar 24 00:07:47.992: INFO: Waiting for pod pod-secrets-a58dc2b0-05ab-4afb-8614-03a40e54f3eb to disappear Mar 24 00:07:47.995: INFO: Pod pod-secrets-a58dc2b0-05ab-4afb-8614-03a40e54f3eb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:07:47.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9368" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2040,"failed":0} SSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:07:48.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:07:48.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-9355" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":135,"skipped":2043,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:07:48.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:07:48.251: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 24 00:07:48.258: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 24 00:07:53.261: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 24 00:07:53.262: INFO: Creating deployment "test-rolling-update-deployment" Mar 24 00:07:53.265: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 24 00:07:53.272: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 24 00:07:55.279: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 24 00:07:55.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720605273, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720605273, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720605273, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720605273, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 00:07:57.320: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 24 00:07:57.330: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8661 /apis/apps/v1/namespaces/deployment-8661/deployments/test-rolling-update-deployment aa84c2c9-c223-4f57-8b68-acca67ebc7ba 2279451 1 2020-03-24 00:07:53 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00251eb48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-24 00:07:53 +0000 UTC,LastTransitionTime:2020-03-24 00:07:53 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-03-24 00:07:55 +0000 UTC,LastTransitionTime:2020-03-24 00:07:53 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 24 00:07:57.333: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-8661 /apis/apps/v1/namespaces/deployment-8661/replicasets/test-rolling-update-deployment-664dd8fc7f 36510764-fc20-46d5-a9fa-4bbbaba22c4f 2279440 1 2020-03-24 00:07:53 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment aa84c2c9-c223-4f57-8b68-acca67ebc7ba 0xc002c27567 0xc002c27568}] []
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c275d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 24 00:07:57.333: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 24 00:07:57.334: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8661 /apis/apps/v1/namespaces/deployment-8661/replicasets/test-rolling-update-controller 0ab8a392-a9a2-4fb1-b0d8-18501e183a8b 2279450 2 2020-03-24 00:07:48 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment aa84c2c9-c223-4f57-8b68-acca67ebc7ba 0xc002c27497 0xc002c27498}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c274f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 24 00:07:57.337: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-8wv8l" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-8wv8l test-rolling-update-deployment-664dd8fc7f- deployment-8661 /api/v1/namespaces/deployment-8661/pods/test-rolling-update-deployment-664dd8fc7f-8wv8l d3102fd7-cf3f-40fe-85e1-28fd352c6c53 2279439 0 2020-03-24 00:07:53 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 36510764-fc20-46d5-a9fa-4bbbaba22c4f 0xc002c27cb7 0xc002c27cb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4l5wm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4l5wm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4l5wm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:07:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:07:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:07:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:07:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.80,StartTime:2020-03-24 00:07:53 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-24 00:07:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://f333b5d1ce7948362c91a7934a016d3a30698c8353391d9afe4596a01caa175f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:07:57.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8661" for this suite. • [SLOW TEST:9.173 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":136,"skipped":2048,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:07:57.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4906.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4906.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.test-service-2.dns-4906.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4906.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 69.94.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.94.69_udp@PTR;check="$$(dig +tcp +noall +answer +search 69.94.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.94.69_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4906.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4906.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4906.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4906.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4906.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4906.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4906.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 69.94.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.94.69_udp@PTR;check="$$(dig +tcp +noall +answer +search 69.94.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.94.69_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 00:08:03.526: INFO: Unable to read wheezy_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:03.529: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:03.532: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:03.536: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:03.560: INFO: Unable to read jessie_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:03.563: INFO: Unable to read jessie_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:03.567: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:03.571: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:03.590: INFO: Lookups using dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438 failed for: [wheezy_udp@dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_udp@dns-test-service.dns-4906.svc.cluster.local jessie_tcp@dns-test-service.dns-4906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local] Mar 24 00:08:08.595: INFO: Unable to read wheezy_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:08.599: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods 
dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:08.603: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:08.606: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:08.630: INFO: Unable to read jessie_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:08.634: INFO: Unable to read jessie_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:08.636: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:08.638: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:08.671: INFO: Lookups using dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438 failed for: [wheezy_udp@dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_udp@dns-test-service.dns-4906.svc.cluster.local jessie_tcp@dns-test-service.dns-4906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local] Mar 24 00:08:13.595: INFO: Unable to read wheezy_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:13.599: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:13.610: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:13.614: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:13.634: INFO: Unable to read jessie_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the 
server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:13.637: INFO: Unable to read jessie_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:13.640: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:13.643: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:13.667: INFO: Lookups using dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438 failed for: [wheezy_udp@dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_udp@dns-test-service.dns-4906.svc.cluster.local jessie_tcp@dns-test-service.dns-4906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local] Mar 24 00:08:18.595: INFO: Unable to read wheezy_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:18.599: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:18.603: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:18.606: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:18.630: INFO: Unable to read jessie_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:18.633: INFO: Unable to read jessie_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:18.636: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:18.639: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod 
dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:18.657: INFO: Lookups using dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438 failed for: [wheezy_udp@dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_udp@dns-test-service.dns-4906.svc.cluster.local jessie_tcp@dns-test-service.dns-4906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local] Mar 24 00:08:23.595: INFO: Unable to read wheezy_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:23.599: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:23.602: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:23.606: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:23.626: INFO: Unable to read jessie_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:23.629: INFO: Unable to read jessie_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:23.632: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:23.635: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:23.654: INFO: Lookups using dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438 failed for: [wheezy_udp@dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_udp@dns-test-service.dns-4906.svc.cluster.local jessie_tcp@dns-test-service.dns-4906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local] Mar 24 
00:08:28.595: INFO: Unable to read wheezy_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:28.599: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:28.603: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:28.606: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:28.626: INFO: Unable to read jessie_udp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:28.629: INFO: Unable to read jessie_tcp@dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:28.632: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:28.634: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local from pod dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438: the server could not find the requested resource (get pods dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438) Mar 24 00:08:28.651: INFO: Lookups using dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438 failed for: [wheezy_udp@dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@dns-test-service.dns-4906.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_udp@dns-test-service.dns-4906.svc.cluster.local jessie_tcp@dns-test-service.dns-4906.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4906.svc.cluster.local] Mar 24 00:08:33.653: INFO: DNS probes using dns-4906/dns-test-23c01b7e-ed2f-43ea-874f-9ea388781438 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:08:34.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4906" for this suite. 
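The long run of "Unable to read ... the server could not find the requested resource" lines above is the DNS probe's retry loop: the framework polls each client pod (the "wheezy" and "jessie" images) for a per-record result file every five seconds, a miss is reported with that error until the corresponding lookup succeeds, and the 00:08:33 line records the eventual success for all eight names. A minimal sketch of the lookups being exercised, using only the Go standard library and the service name taken from the log; this resolves only when run from a pod inside the cluster:

```go
// Sketch of the A and SRV lookups the DNS conformance probe performs.
// The service/namespace names come from the log above; run in-cluster.
package main

import (
	"fmt"
	"net"
)

func main() {
	// A record for the ClusterIP service.
	addrs, err := net.LookupHost("dns-test-service.dns-4906.svc.cluster.local")
	fmt.Println(addrs, err)

	// SRV record for the named "http" port, i.e. the
	// _http._tcp.dns-test-service.dns-4906.svc.cluster.local name in the log.
	cname, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-4906.svc.cluster.local")
	fmt.Println(cname, err)
	for _, s := range srvs {
		fmt.Printf("%s:%d\n", s.Target, s.Port)
	}
}
```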
• [SLOW TEST:36.880 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":137,"skipped":2050,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:08:34.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-9463 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 24 00:08:34.284: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 24 00:08:34.358: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 24 00:08:36.556: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 24 00:08:38.364: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:08:40.361: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:08:42.362: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:08:44.362: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:08:46.371: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:08:48.362: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:08:50.362: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:08:52.362: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:08:54.362: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 24 00:08:54.368: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 24 00:08:58.389: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.82:8080/dial?request=hostname&protocol=udp&host=10.244.2.192&port=8081&tries=1'] Namespace:pod-network-test-9463 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 00:08:58.389: INFO: >>> kubeConfig: /root/.kube/config I0324 00:08:58.428324 7 log.go:172] (0xc0024d2370) (0xc0029474a0) Create stream I0324 00:08:58.428360 7 log.go:172] (0xc0024d2370) (0xc0029474a0) Stream added, broadcasting: 1 I0324 00:08:58.431578 7 log.go:172] (0xc0024d2370) Reply frame received for 1 I0324 00:08:58.431645 7 log.go:172] (0xc0024d2370) (0xc001e061e0) Create stream I0324 00:08:58.431683 7 log.go:172] (0xc0024d2370) (0xc001e061e0) Stream added, broadcasting: 3 I0324 00:08:58.432800 7 log.go:172] (0xc0024d2370) Reply frame received for 3 I0324 00:08:58.432829 7 
log.go:172] (0xc0024d2370) (0xc002947540) Create stream I0324 00:08:58.432847 7 log.go:172] (0xc0024d2370) (0xc002947540) Stream added, broadcasting: 5 I0324 00:08:58.433964 7 log.go:172] (0xc0024d2370) Reply frame received for 5 I0324 00:08:58.516705 7 log.go:172] (0xc0024d2370) Data frame received for 3 I0324 00:08:58.516738 7 log.go:172] (0xc001e061e0) (3) Data frame handling I0324 00:08:58.516774 7 log.go:172] (0xc001e061e0) (3) Data frame sent I0324 00:08:58.517643 7 log.go:172] (0xc0024d2370) Data frame received for 5 I0324 00:08:58.517681 7 log.go:172] (0xc002947540) (5) Data frame handling I0324 00:08:58.517745 7 log.go:172] (0xc0024d2370) Data frame received for 3 I0324 00:08:58.517788 7 log.go:172] (0xc001e061e0) (3) Data frame handling I0324 00:08:58.519604 7 log.go:172] (0xc0024d2370) Data frame received for 1 I0324 00:08:58.519636 7 log.go:172] (0xc0029474a0) (1) Data frame handling I0324 00:08:58.519673 7 log.go:172] (0xc0029474a0) (1) Data frame sent I0324 00:08:58.519701 7 log.go:172] (0xc0024d2370) (0xc0029474a0) Stream removed, broadcasting: 1 I0324 00:08:58.519853 7 log.go:172] (0xc0024d2370) (0xc0029474a0) Stream removed, broadcasting: 1 I0324 00:08:58.519877 7 log.go:172] (0xc0024d2370) (0xc001e061e0) Stream removed, broadcasting: 3 I0324 00:08:58.519944 7 log.go:172] (0xc0024d2370) Go away received I0324 00:08:58.520074 7 log.go:172] (0xc0024d2370) (0xc002947540) Stream removed, broadcasting: 5 Mar 24 00:08:58.520: INFO: Waiting for responses: map[] Mar 24 00:08:58.524: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.82:8080/dial?request=hostname&protocol=udp&host=10.244.1.81&port=8081&tries=1'] Namespace:pod-network-test-9463 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 00:08:58.524: INFO: >>> kubeConfig: /root/.kube/config I0324 00:08:58.546932 7 log.go:172] (0xc0029b22c0) (0xc0022be6e0) Create stream I0324 00:08:58.546964 7 log.go:172] (0xc0029b22c0) (0xc0022be6e0) Stream added, broadcasting: 1 I0324 00:08:58.549638 7 log.go:172] (0xc0029b22c0) Reply frame received for 1 I0324 00:08:58.549677 7 log.go:172] (0xc0029b22c0) (0xc0029475e0) Create stream I0324 00:08:58.549690 7 log.go:172] (0xc0029b22c0) (0xc0029475e0) Stream added, broadcasting: 3 I0324 00:08:58.550670 7 log.go:172] (0xc0029b22c0) Reply frame received for 3 I0324 00:08:58.550708 7 log.go:172] (0xc0029b22c0) (0xc002a6d0e0) Create stream I0324 00:08:58.550721 7 log.go:172] (0xc0029b22c0) (0xc002a6d0e0) Stream added, broadcasting: 5 I0324 00:08:58.551601 7 log.go:172] (0xc0029b22c0) Reply frame received for 5 I0324 00:08:58.626879 7 log.go:172] (0xc0029b22c0) Data frame received for 3 I0324 00:08:58.626925 7 log.go:172] (0xc0029475e0) (3) Data frame handling I0324 00:08:58.626960 7 log.go:172] (0xc0029475e0) (3) Data frame sent I0324 00:08:58.627485 7 log.go:172] (0xc0029b22c0) Data frame received for 3 I0324 00:08:58.627514 7 log.go:172] (0xc0029475e0) (3) Data frame handling I0324 00:08:58.627536 7 log.go:172] (0xc0029b22c0) Data frame received for 5 I0324 00:08:58.627550 7 log.go:172] (0xc002a6d0e0) (5) Data frame handling I0324 00:08:58.629407 7 log.go:172] (0xc0029b22c0) Data frame received for 1 I0324 00:08:58.629435 7 log.go:172] (0xc0022be6e0) (1) Data frame handling I0324 00:08:58.629456 7 log.go:172] (0xc0022be6e0) (1) Data frame sent I0324 00:08:58.629481 7 log.go:172] (0xc0029b22c0) (0xc0022be6e0) Stream removed, broadcasting: 1 I0324 00:08:58.629609 7 log.go:172] (0xc0029b22c0) 
(0xc0022be6e0) Stream removed, broadcasting: 1 I0324 00:08:58.629636 7 log.go:172] (0xc0029b22c0) (0xc0029475e0) Stream removed, broadcasting: 3 I0324 00:08:58.629795 7 log.go:172] (0xc0029b22c0) Go away received I0324 00:08:58.629996 7 log.go:172] (0xc0029b22c0) (0xc002a6d0e0) Stream removed, broadcasting: 5 Mar 24 00:08:58.630: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:08:58.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9463" for this suite. • [SLOW TEST:24.425 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2077,"failed":0} [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:08:58.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-90bf0884-26be-4d0b-95c1-c9737c8f6283 STEP: Creating a pod to test consume configMaps Mar 24 00:08:58.717: INFO: Waiting up to 5m0s for pod "pod-configmaps-94c54b31-c49b-4c0e-8339-675d72eeb515" in namespace "configmap-6317" to be "Succeeded or Failed" Mar 24 00:08:58.721: INFO: Pod "pod-configmaps-94c54b31-c49b-4c0e-8339-675d72eeb515": Phase="Pending", Reason="", readiness=false. Elapsed: 3.929045ms Mar 24 00:09:00.724: INFO: Pod "pod-configmaps-94c54b31-c49b-4c0e-8339-675d72eeb515": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007728137s Mar 24 00:09:02.728: INFO: Pod "pod-configmaps-94c54b31-c49b-4c0e-8339-675d72eeb515": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011765498s STEP: Saw pod success Mar 24 00:09:02.728: INFO: Pod "pod-configmaps-94c54b31-c49b-4c0e-8339-675d72eeb515" satisfied condition "Succeeded or Failed" Mar 24 00:09:02.731: INFO: Trying to get logs from node latest-worker pod pod-configmaps-94c54b31-c49b-4c0e-8339-675d72eeb515 container configmap-volume-test: STEP: delete the pod Mar 24 00:09:02.787: INFO: Waiting for pod pod-configmaps-94c54b31-c49b-4c0e-8339-675d72eeb515 to disappear Mar 24 00:09:02.792: INFO: Pod pod-configmaps-94c54b31-c49b-4c0e-8339-675d72eeb515 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:09:02.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6317" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2077,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:09:02.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 24 00:09:02.843: INFO: >>> kubeConfig: /root/.kube/config Mar 24 00:09:04.773: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:09:15.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2008" for this suite. 
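The CRD publishing test above registers two CustomResourceDefinitions that share a group and version but differ in kind, then verifies both kinds appear in the cluster's aggregated OpenAPI document. A sketch of fetching that document for inspection, assuming a client-go release with context-taking request signatures (v0.18+); the kubeconfig path is the one this run itself uses:

```go
// Fetch the aggregated OpenAPI v2 document the test inspects.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The conformance test asserts that both CRD kinds show up as schema
	// definitions in this document.
	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").Do(context.TODO()).Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("fetched %d bytes of OpenAPI schema\n", len(raw))
}
```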
• [SLOW TEST:12.569 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":140,"skipped":2087,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:09:15.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0324 00:09:39.516197 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 24 00:09:39.516: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:09:39.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1157" for this suite. 
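The garbage-collector test above hinges on the delete call's propagation policy: with Orphan, the Deployment is removed but the garbage collector must strip the owner reference from its ReplicaSet rather than delete it. A minimal sketch of that delete, assuming client-go v0.18+ signatures; the deployment name is illustrative, the namespace is the one from the log:

```go
// Issue an orphaning delete: the Deployment goes away, its ReplicaSet stays.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	orphan := metav1.DeletePropagationOrphan // deliberately leave dependents behind
	err = cs.AppsV1().Deployments("gc-1157").Delete(context.TODO(),
		"simpletest-deployment", // hypothetical name for illustration
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
}
```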
• [SLOW TEST:24.150 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":141,"skipped":2123,"failed":0} SSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:09:39.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-1aefab21-9ef6-499f-a84f-9e257b557094 in namespace container-probe-3821 Mar 24 00:09:43.595: INFO: Started pod liveness-1aefab21-9ef6-499f-a84f-9e257b557094 in namespace container-probe-3821 STEP: checking the pod's current state and verifying that restartCount is present Mar 24 00:09:43.598: INFO: Initial restart count of pod liveness-1aefab21-9ef6-499f-a84f-9e257b557094 is 0 Mar 24 00:09:57.644: INFO: Restart count of pod container-probe-3821/liveness-1aefab21-9ef6-499f-a84f-9e257b557094 is now 1 (14.045659352s elapsed) Mar 24 00:10:17.698: INFO: Restart count of pod container-probe-3821/liveness-1aefab21-9ef6-499f-a84f-9e257b557094 is now 2 (34.099602413s elapsed) Mar 24 00:10:37.751: INFO: Restart count of pod container-probe-3821/liveness-1aefab21-9ef6-499f-a84f-9e257b557094 is now 3 (54.152906615s elapsed) Mar 24 00:10:57.791: INFO: Restart count of pod container-probe-3821/liveness-1aefab21-9ef6-499f-a84f-9e257b557094 is now 4 (1m14.193457034s elapsed) Mar 24 00:12:07.940: INFO: Restart count of pod container-probe-3821/liveness-1aefab21-9ef6-499f-a84f-9e257b557094 is now 5 (2m24.342056548s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:12:07.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3821" for this suite. 
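The restart counts above climb strictly 1, 2, 3, 4, 5, and the widening gaps between them (14s, then ~20s intervals, then 70s for the last) reflect the kubelet's exponential crash backoff between restarts. A hedged sketch of the kind of pod that produces this pattern: a container that goes unhealthy shortly after start, with a liveness probe that then fails every period. Image, command, and timings are illustrative, not the test's exact spec; the probe's embedded field is named Handler in the client-go generation of this run (renamed ProbeHandler in later releases):

```go
// Pod whose liveness probe starts failing, driving monotonic restartCount.
package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Healthy for 30s, then the probed file disappears.
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
}
```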
• [SLOW TEST:148.436 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2126,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:12:07.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 24 00:12:08.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-583faf79-6509-480a-abf8-b2b4c3a2c990" in namespace "projected-5252" to be "Succeeded or Failed" Mar 24 00:12:08.028: INFO: Pod "downwardapi-volume-583faf79-6509-480a-abf8-b2b4c3a2c990": Phase="Pending", Reason="", readiness=false. Elapsed: 6.497733ms Mar 24 00:12:10.056: INFO: Pod "downwardapi-volume-583faf79-6509-480a-abf8-b2b4c3a2c990": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034228631s Mar 24 00:12:12.060: INFO: Pod "downwardapi-volume-583faf79-6509-480a-abf8-b2b4c3a2c990": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038736768s STEP: Saw pod success Mar 24 00:12:12.061: INFO: Pod "downwardapi-volume-583faf79-6509-480a-abf8-b2b4c3a2c990" satisfied condition "Succeeded or Failed" Mar 24 00:12:12.064: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-583faf79-6509-480a-abf8-b2b4c3a2c990 container client-container: STEP: delete the pod Mar 24 00:12:12.090: INFO: Waiting for pod downwardapi-volume-583faf79-6509-480a-abf8-b2b4c3a2c990 to disappear Mar 24 00:12:12.094: INFO: Pod downwardapi-volume-583faf79-6509-480a-abf8-b2b4c3a2c990 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:12:12.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5252" for this suite. 
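The downward API test above mounts the container's own CPU limit into a file via a projected volume, then reads the file back from the pod logs. A sketch of that volume wiring under assumed names (the mount path, file name, limit value, and busybox image are illustrative):

```go
// Expose the container's limits.cpu through a projected downward API volume.
package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									// The limit is written to the file, divided
									// by the divisor (1 when unset).
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}
```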
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2128,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:12:12.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:12:12.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 24 00:12:12.735: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-24T00:12:12Z generation:1 name:name1 resourceVersion:2280516 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:dbe476b5-9f40-46be-b8da-6aace31cea48] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 24 00:12:22.741: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-24T00:12:22Z generation:1 name:name2 resourceVersion:2280566 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3b413a6b-a9cc-424e-9cdb-3e2f8a259fe4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 24 00:12:32.757: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-24T00:12:12Z generation:2 name:name1 resourceVersion:2280596 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:dbe476b5-9f40-46be-b8da-6aace31cea48] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 24 00:12:42.762: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-24T00:12:22Z generation:2 name:name2 resourceVersion:2280626 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3b413a6b-a9cc-424e-9cdb-3e2f8a259fe4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 24 00:12:52.770: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-24T00:12:12Z generation:2 name:name1 resourceVersion:2280656 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:dbe476b5-9f40-46be-b8da-6aace31cea48] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 24 00:13:02.776: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-24T00:12:22Z generation:2 name:name2 resourceVersion:2280681 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:3b413a6b-a9cc-424e-9cdb-3e2f8a259fe4] 
num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:13:13.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9418" for this suite. • [SLOW TEST:61.194 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":144,"skipped":2141,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:13:13.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:13:14.142: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:13:16.153: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720605594, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720605594, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720605594, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720605594, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:13:19.185: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:13:19.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5392" for this suite. STEP: Destroying namespace "webhook-5392-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.095 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":145,"skipped":2144,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:13:19.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 24 00:13:19.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4604' Mar 24 00:13:23.027: INFO: stderr: "" Mar 24 00:13:23.027: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 24 00:13:24.031: INFO: Selector matched 1 pods for map[app:agnhost] Mar 24 00:13:24.031: INFO: Found 0 / 1 Mar 24 00:13:25.030: INFO: Selector matched 1 pods for map[app:agnhost] Mar 24 00:13:25.030: INFO: Found 0 / 1 Mar 24 00:13:26.031: INFO: Selector matched 1 pods for map[app:agnhost] Mar 24 00:13:26.031: INFO: Found 1 / 1 Mar 24 00:13:26.031: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 24 00:13:26.034: INFO: Selector matched 1 pods for map[app:agnhost] Mar 24 00:13:26.034: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 24 00:13:26.034: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-s6nls --namespace=kubectl-4604 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 24 00:13:26.133: INFO: stderr: "" Mar 24 00:13:26.134: INFO: stdout: "pod/agnhost-master-s6nls patched\n" STEP: checking annotations Mar 24 00:13:26.137: INFO: Selector matched 1 pods for map[app:agnhost] Mar 24 00:13:26.137: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:13:26.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4604" for this suite. • [SLOW TEST:6.755 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":146,"skipped":2183,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:13:26.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 24 00:13:30.253: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:13:30.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6757" for this suite. 
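The termination-message test above ("Expected: &{OK} to match Container's Termination Message: OK") turns on a container spec detail: with TerminationMessagePolicy set to FallbackToLogsOnError, the kubelet reads the file at TerminationMessagePath when it exists, and falls back to the tail of the container logs only for error exits. A sketch of such a container; the command and the default message path are illustrative rather than the test's exact spec:

```go
// Container that writes its termination message to a file and exits 0.
package e2esketch

import corev1 "k8s.io/api/core/v1"

func terminationMessageContainer() corev1.Container {
	return corev1.Container{
		Name:  "termination-message-container",
		Image: "busybox",
		// Writes "OK" to the message file, matching the assertion in the log.
		Command:                  []string{"/bin/sh", "-c", "printf OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}
```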
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2186,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:13:30.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:13:30.459: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 24 00:13:30.469: INFO: Number of nodes with available pods: 0 Mar 24 00:13:30.469: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Mar 24 00:13:30.519: INFO: Number of nodes with available pods: 0 Mar 24 00:13:30.519: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:31.530: INFO: Number of nodes with available pods: 0 Mar 24 00:13:31.530: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:32.522: INFO: Number of nodes with available pods: 0 Mar 24 00:13:32.522: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:33.548: INFO: Number of nodes with available pods: 0 Mar 24 00:13:33.548: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:34.522: INFO: Number of nodes with available pods: 1 Mar 24 00:13:34.522: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 24 00:13:34.546: INFO: Number of nodes with available pods: 1 Mar 24 00:13:34.546: INFO: Number of running nodes: 0, number of available pods: 1 Mar 24 00:13:35.549: INFO: Number of nodes with available pods: 0 Mar 24 00:13:35.549: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 24 00:13:35.591: INFO: Number of nodes with available pods: 0 Mar 24 00:13:35.591: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:36.599: INFO: Number of nodes with available pods: 0 Mar 24 00:13:36.599: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:37.603: INFO: Number of nodes with available pods: 0 Mar 24 00:13:37.603: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:38.594: INFO: Number of nodes with available pods: 0 Mar 24 00:13:38.594: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:39.594: INFO: Number of nodes with available pods: 0 Mar 24 00:13:39.594: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:40.596: INFO: Number of nodes with available 
pods: 0 Mar 24 00:13:40.596: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:41.595: INFO: Number of nodes with available pods: 0 Mar 24 00:13:41.595: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:42.594: INFO: Number of nodes with available pods: 0 Mar 24 00:13:42.594: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:43.603: INFO: Number of nodes with available pods: 0 Mar 24 00:13:43.603: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:44.795: INFO: Number of nodes with available pods: 0 Mar 24 00:13:44.795: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:13:45.663: INFO: Number of nodes with available pods: 1 Mar 24 00:13:45.663: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3099, will wait for the garbage collector to delete the pods Mar 24 00:13:45.748: INFO: Deleting DaemonSet.extensions daemon-set took: 6.182072ms Mar 24 00:13:46.048: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.278232ms Mar 24 00:13:49.653: INFO: Number of nodes with available pods: 0 Mar 24 00:13:49.653: INFO: Number of running nodes: 0, number of available pods: 0 Mar 24 00:13:49.656: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3099/daemonsets","resourceVersion":"2281008"},"items":null} Mar 24 00:13:49.659: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3099/pods","resourceVersion":"2281008"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:13:49.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3099" for this suite. 
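The "complex daemon" test above drives scheduling through labels: the DaemonSet template carries a node selector, so pods launch only on nodes labelled blue, disappear when the label flips to green, and return once the selector is updated to match; the repeated "Node latest-worker is running more than one daemon pod" lines are just the poll loop's status while that settles. Mid-test the update strategy is also patched to RollingUpdate. A sketch of the resulting shape (names, labels, and image are illustrative):

```go
// DaemonSet gated on a node label, with a RollingUpdate strategy.
package e2esketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func complexDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pods schedule only on nodes labelled color=blue.
					NodeSelector: map[string]string{"color": "blue"},
					Containers:   []corev1.Container{{Name: "app", Image: "httpd:2.4.38-alpine"}},
				},
			},
		},
	}
}
```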
• [SLOW TEST:19.407 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":148,"skipped":2199,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:13:49.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0324 00:14:00.915375 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 24 00:14:00.915: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:14:00.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8956" for this suite. 
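The dual-owner test above works by giving half the pods two entries in metadata.ownerReferences: the ReplicationController being deleted and simpletest-rc-to-stay. The garbage collector may only collect an object once every owner is gone, so deleting the first owner, even while it waits for dependents, must leave those pods alive. A sketch of the ownership wiring (UIDs are placeholders supplied by the caller):

```go
// Pod owned by two ReplicationControllers; it survives deletion of either one.
package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func dualOwnedPod(toBeDeletedUID, toStayUID types.UID) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "simpletest-pod",
			OwnerReferences: []metav1.OwnerReference{
				{APIVersion: "v1", Kind: "ReplicationController",
					Name: "simpletest-rc-to-be-deleted", UID: toBeDeletedUID},
				{APIVersion: "v1", Kind: "ReplicationController",
					Name: "simpletest-rc-to-stay", UID: toStayUID},
			},
		},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "app", Image: "busybox"}}},
	}
}
```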
• [SLOW TEST:11.204 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":149,"skipped":2212,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:14:00.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-4724/configmap-test-ad02344f-bd33-4081-977a-b2c2a6662577 STEP: Creating a pod to test consume configMaps Mar 24 00:14:01.077: INFO: Waiting up to 5m0s for pod "pod-configmaps-eef1a0d5-ec3e-48d2-83a9-4d2306fe3d3a" in namespace "configmap-4724" to be "Succeeded or Failed" Mar 24 00:14:01.093: INFO: Pod "pod-configmaps-eef1a0d5-ec3e-48d2-83a9-4d2306fe3d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.897944ms Mar 24 00:14:03.147: INFO: Pod "pod-configmaps-eef1a0d5-ec3e-48d2-83a9-4d2306fe3d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069832777s Mar 24 00:14:05.152: INFO: Pod "pod-configmaps-eef1a0d5-ec3e-48d2-83a9-4d2306fe3d3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074118057s STEP: Saw pod success Mar 24 00:14:05.152: INFO: Pod "pod-configmaps-eef1a0d5-ec3e-48d2-83a9-4d2306fe3d3a" satisfied condition "Succeeded or Failed" Mar 24 00:14:05.155: INFO: Trying to get logs from node latest-worker pod pod-configmaps-eef1a0d5-ec3e-48d2-83a9-4d2306fe3d3a container env-test: STEP: delete the pod Mar 24 00:14:05.185: INFO: Waiting for pod pod-configmaps-eef1a0d5-ec3e-48d2-83a9-4d2306fe3d3a to disappear Mar 24 00:14:05.207: INFO: Pod pod-configmaps-eef1a0d5-ec3e-48d2-83a9-4d2306fe3d3a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:14:05.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4724" for this suite. 
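The env-var test above projects a single ConfigMap key into the env-test container's environment and asserts on the container's output. A sketch of that consumption pattern; the ConfigMap name is the one from the log, while the key and variable names are assumptions for illustration:

```go
// Consume one ConfigMap key as an environment variable.
package e2esketch

import corev1 "k8s.io/api/core/v1"

func envFromConfigMap() corev1.Container {
	return corev1.Container{
		Name:    "env-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "CONFIG_DATA_1",
			ValueFrom: &corev1.EnvVarSource{
				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{
						Name: "configmap-test-ad02344f-bd33-4081-977a-b2c2a6662577",
					},
					Key: "data-1", // hypothetical key name
				},
			},
		}},
	}
}
```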
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:14:05.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-0287346e-1884-4318-8876-eab6fdfd7fb6 STEP: Creating a pod to test consume configMaps Mar 24 00:14:05.286: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6e0212c-b2cc-4a09-a012-e164cd66115f" in namespace "configmap-7260" to be "Succeeded or Failed" Mar 24 00:14:05.297: INFO: Pod "pod-configmaps-c6e0212c-b2cc-4a09-a012-e164cd66115f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.748812ms Mar 24 00:14:07.346: INFO: Pod "pod-configmaps-c6e0212c-b2cc-4a09-a012-e164cd66115f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059943838s Mar 24 00:14:09.350: INFO: Pod "pod-configmaps-c6e0212c-b2cc-4a09-a012-e164cd66115f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064022944s STEP: Saw pod success Mar 24 00:14:09.350: INFO: Pod "pod-configmaps-c6e0212c-b2cc-4a09-a012-e164cd66115f" satisfied condition "Succeeded or Failed" Mar 24 00:14:09.354: INFO: Trying to get logs from node latest-worker pod pod-configmaps-c6e0212c-b2cc-4a09-a012-e164cd66115f container configmap-volume-test: STEP: delete the pod Mar 24 00:14:09.550: INFO: Waiting for pod pod-configmaps-c6e0212c-b2cc-4a09-a012-e164cd66115f to disappear Mar 24 00:14:09.680: INFO: Pod pod-configmaps-c6e0212c-b2cc-4a09-a012-e164cd66115f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:14:09.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7260" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:14:09.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4047 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-4047 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4047 Mar 24 00:14:09.864: INFO: Found 0 stateful pods, waiting for 1 Mar 24 00:14:19.868: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 24 00:14:19.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 24 00:14:20.171: INFO: stderr: "I0324 00:14:20.002375 679 log.go:172] (0xc0008fb6b0) (0xc000b9e3c0) Create stream\nI0324 00:14:20.002433 679 log.go:172] (0xc0008fb6b0) (0xc000b9e3c0) Stream added, broadcasting: 1\nI0324 00:14:20.005279 679 log.go:172] (0xc0008fb6b0) Reply frame received for 1\nI0324 00:14:20.005315 679 log.go:172] (0xc0008fb6b0) (0xc000bba0a0) Create stream\nI0324 00:14:20.005325 679 log.go:172] (0xc0008fb6b0) (0xc000bba0a0) Stream added, broadcasting: 3\nI0324 00:14:20.006305 679 log.go:172] (0xc0008fb6b0) Reply frame received for 3\nI0324 00:14:20.006338 679 log.go:172] (0xc0008fb6b0) (0xc0008dc3c0) Create stream\nI0324 00:14:20.006351 679 log.go:172] (0xc0008fb6b0) (0xc0008dc3c0) Stream added, broadcasting: 5\nI0324 00:14:20.007183 679 log.go:172] (0xc0008fb6b0) Reply frame received for 5\nI0324 00:14:20.088187 679 log.go:172] (0xc0008fb6b0) Data frame received for 5\nI0324 00:14:20.088222 679 log.go:172] (0xc0008dc3c0) (5) Data frame handling\nI0324 00:14:20.088250 679 log.go:172] (0xc0008dc3c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0324 00:14:20.164311 679 log.go:172] (0xc0008fb6b0) Data frame received for 3\nI0324 00:14:20.164344 679 log.go:172] (0xc000bba0a0) (3) Data frame handling\nI0324 00:14:20.164353 679 log.go:172] (0xc000bba0a0) (3) Data frame sent\nI0324 00:14:20.164368 679 log.go:172] (0xc0008fb6b0) Data frame received for 3\nI0324 00:14:20.164387 679 log.go:172] 
(0xc0008fb6b0) Data frame received for 5\nI0324 00:14:20.164404 679 log.go:172] (0xc0008dc3c0) (5) Data frame handling\nI0324 00:14:20.164423 679 log.go:172] (0xc000bba0a0) (3) Data frame handling\nI0324 00:14:20.166288 679 log.go:172] (0xc0008fb6b0) Data frame received for 1\nI0324 00:14:20.166331 679 log.go:172] (0xc000b9e3c0) (1) Data frame handling\nI0324 00:14:20.166353 679 log.go:172] (0xc000b9e3c0) (1) Data frame sent\nI0324 00:14:20.166368 679 log.go:172] (0xc0008fb6b0) (0xc000b9e3c0) Stream removed, broadcasting: 1\nI0324 00:14:20.166387 679 log.go:172] (0xc0008fb6b0) Go away received\nI0324 00:14:20.166799 679 log.go:172] (0xc0008fb6b0) (0xc000b9e3c0) Stream removed, broadcasting: 1\nI0324 00:14:20.166836 679 log.go:172] (0xc0008fb6b0) (0xc000bba0a0) Stream removed, broadcasting: 3\nI0324 00:14:20.166865 679 log.go:172] (0xc0008fb6b0) (0xc0008dc3c0) Stream removed, broadcasting: 5\n" Mar 24 00:14:20.171: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 24 00:14:20.171: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 24 00:14:20.174: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 24 00:14:30.179: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 24 00:14:30.179: INFO: Waiting for statefulset status.replicas updated to 0 Mar 24 00:14:30.196: INFO: POD NODE PHASE GRACE CONDITIONS Mar 24 00:14:30.197: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:09 +0000 UTC }] Mar 24 00:14:30.197: INFO: Mar 24 00:14:30.197: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 24 00:14:31.203: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992036312s Mar 24 00:14:32.208: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.9859259s Mar 24 00:14:33.256: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981123035s Mar 24 00:14:34.262: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.932669089s Mar 24 00:14:35.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.927144281s Mar 24 00:14:36.271: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.922646004s Mar 24 00:14:37.275: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.918037944s Mar 24 00:14:38.280: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.913234552s Mar 24 00:14:39.285: INFO: Verifying statefulset ss doesn't scale past 3 for another 908.457982ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4047 Mar 24 00:14:40.291: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 24 00:14:40.516: INFO: stderr: "I0324 00:14:40.422271 699 log.go:172] (0xc000ae2000) (0xc0007b9360) Create stream\nI0324 00:14:40.422330 699 log.go:172] (0xc000ae2000) 
(0xc0007b9360) Stream added, broadcasting: 1\nI0324 00:14:40.424852 699 log.go:172] (0xc000ae2000) Reply frame received for 1\nI0324 00:14:40.424895 699 log.go:172] (0xc000ae2000) (0xc0009d4000) Create stream\nI0324 00:14:40.424909 699 log.go:172] (0xc000ae2000) (0xc0009d4000) Stream added, broadcasting: 3\nI0324 00:14:40.425923 699 log.go:172] (0xc000ae2000) Reply frame received for 3\nI0324 00:14:40.425946 699 log.go:172] (0xc000ae2000) (0xc0007b9540) Create stream\nI0324 00:14:40.425953 699 log.go:172] (0xc000ae2000) (0xc0007b9540) Stream added, broadcasting: 5\nI0324 00:14:40.426891 699 log.go:172] (0xc000ae2000) Reply frame received for 5\nI0324 00:14:40.509522 699 log.go:172] (0xc000ae2000) Data frame received for 3\nI0324 00:14:40.509544 699 log.go:172] (0xc0009d4000) (3) Data frame handling\nI0324 00:14:40.509552 699 log.go:172] (0xc0009d4000) (3) Data frame sent\nI0324 00:14:40.509561 699 log.go:172] (0xc000ae2000) Data frame received for 3\nI0324 00:14:40.509570 699 log.go:172] (0xc0009d4000) (3) Data frame handling\nI0324 00:14:40.509923 699 log.go:172] (0xc000ae2000) Data frame received for 5\nI0324 00:14:40.509948 699 log.go:172] (0xc0007b9540) (5) Data frame handling\nI0324 00:14:40.509967 699 log.go:172] (0xc0007b9540) (5) Data frame sent\nI0324 00:14:40.509979 699 log.go:172] (0xc000ae2000) Data frame received for 5\nI0324 00:14:40.509989 699 log.go:172] (0xc0007b9540) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0324 00:14:40.511701 699 log.go:172] (0xc000ae2000) Data frame received for 1\nI0324 00:14:40.511734 699 log.go:172] (0xc0007b9360) (1) Data frame handling\nI0324 00:14:40.511769 699 log.go:172] (0xc0007b9360) (1) Data frame sent\nI0324 00:14:40.511786 699 log.go:172] (0xc000ae2000) (0xc0007b9360) Stream removed, broadcasting: 1\nI0324 00:14:40.511838 699 log.go:172] (0xc000ae2000) Go away received\nI0324 00:14:40.512196 699 log.go:172] (0xc000ae2000) (0xc0007b9360) Stream removed, broadcasting: 1\nI0324 00:14:40.512222 699 log.go:172] (0xc000ae2000) (0xc0009d4000) Stream removed, broadcasting: 3\nI0324 00:14:40.512234 699 log.go:172] (0xc000ae2000) (0xc0007b9540) Stream removed, broadcasting: 5\n" Mar 24 00:14:40.517: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 24 00:14:40.517: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 24 00:14:40.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 24 00:14:40.725: INFO: stderr: "I0324 00:14:40.656657 721 log.go:172] (0xc000585a20) (0xc0006b75e0) Create stream\nI0324 00:14:40.656717 721 log.go:172] (0xc000585a20) (0xc0006b75e0) Stream added, broadcasting: 1\nI0324 00:14:40.659542 721 log.go:172] (0xc000585a20) Reply frame received for 1\nI0324 00:14:40.659584 721 log.go:172] (0xc000585a20) (0xc0006b7680) Create stream\nI0324 00:14:40.659597 721 log.go:172] (0xc000585a20) (0xc0006b7680) Stream added, broadcasting: 3\nI0324 00:14:40.660663 721 log.go:172] (0xc000585a20) Reply frame received for 3\nI0324 00:14:40.660708 721 log.go:172] (0xc000585a20) (0xc0005b55e0) Create stream\nI0324 00:14:40.660722 721 log.go:172] (0xc000585a20) (0xc0005b55e0) Stream added, broadcasting: 5\nI0324 00:14:40.661914 721 log.go:172] (0xc000585a20) Reply frame received for 5\nI0324 00:14:40.718850 721 
log.go:172] (0xc000585a20) Data frame received for 3\nI0324 00:14:40.718872 721 log.go:172] (0xc0006b7680) (3) Data frame handling\nI0324 00:14:40.718880 721 log.go:172] (0xc0006b7680) (3) Data frame sent\nI0324 00:14:40.718886 721 log.go:172] (0xc000585a20) Data frame received for 3\nI0324 00:14:40.718890 721 log.go:172] (0xc0006b7680) (3) Data frame handling\nI0324 00:14:40.718921 721 log.go:172] (0xc000585a20) Data frame received for 5\nI0324 00:14:40.718957 721 log.go:172] (0xc0005b55e0) (5) Data frame handling\nI0324 00:14:40.718978 721 log.go:172] (0xc0005b55e0) (5) Data frame sent\nI0324 00:14:40.718994 721 log.go:172] (0xc000585a20) Data frame received for 5\nI0324 00:14:40.719010 721 log.go:172] (0xc0005b55e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0324 00:14:40.720619 721 log.go:172] (0xc000585a20) Data frame received for 1\nI0324 00:14:40.720644 721 log.go:172] (0xc0006b75e0) (1) Data frame handling\nI0324 00:14:40.720664 721 log.go:172] (0xc0006b75e0) (1) Data frame sent\nI0324 00:14:40.720828 721 log.go:172] (0xc000585a20) (0xc0006b75e0) Stream removed, broadcasting: 1\nI0324 00:14:40.720869 721 log.go:172] (0xc000585a20) Go away received\nI0324 00:14:40.721195 721 log.go:172] (0xc000585a20) (0xc0006b75e0) Stream removed, broadcasting: 1\nI0324 00:14:40.721211 721 log.go:172] (0xc000585a20) (0xc0006b7680) Stream removed, broadcasting: 3\nI0324 00:14:40.721216 721 log.go:172] (0xc000585a20) (0xc0005b55e0) Stream removed, broadcasting: 5\n" Mar 24 00:14:40.725: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 24 00:14:40.725: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 24 00:14:40.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 24 00:14:40.939: INFO: stderr: "I0324 00:14:40.865728 743 log.go:172] (0xc00003a580) (0xc0007a61e0) Create stream\nI0324 00:14:40.865780 743 log.go:172] (0xc00003a580) (0xc0007a61e0) Stream added, broadcasting: 1\nI0324 00:14:40.868234 743 log.go:172] (0xc00003a580) Reply frame received for 1\nI0324 00:14:40.868292 743 log.go:172] (0xc00003a580) (0xc00073fa40) Create stream\nI0324 00:14:40.868315 743 log.go:172] (0xc00003a580) (0xc00073fa40) Stream added, broadcasting: 3\nI0324 00:14:40.869791 743 log.go:172] (0xc00003a580) Reply frame received for 3\nI0324 00:14:40.869874 743 log.go:172] (0xc00003a580) (0xc00087a000) Create stream\nI0324 00:14:40.869905 743 log.go:172] (0xc00003a580) (0xc00087a000) Stream added, broadcasting: 5\nI0324 00:14:40.870996 743 log.go:172] (0xc00003a580) Reply frame received for 5\nI0324 00:14:40.932750 743 log.go:172] (0xc00003a580) Data frame received for 5\nI0324 00:14:40.932794 743 log.go:172] (0xc00087a000) (5) Data frame handling\nI0324 00:14:40.932808 743 log.go:172] (0xc00087a000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0324 00:14:40.932833 743 log.go:172] (0xc00003a580) Data frame received for 3\nI0324 00:14:40.932869 743 log.go:172] (0xc00073fa40) (3) Data frame handling\nI0324 00:14:40.932893 743 log.go:172] (0xc00073fa40) (3) Data frame sent\nI0324 00:14:40.932916 743 log.go:172] (0xc00003a580) 
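The StatefulSet manifest itself is generated inside the framework and never printed in this log, but the behaviour above implies its shape: a headless Service named test, a container named webserver serving from the httpd docroot, a readiness probe that fails once index.html is moved away, and podManagementPolicy: Parallel, without which the scale-up would have serialized on the unready ss-0. A sketch under those assumptions (the image tag and probe path are guesses, not taken from the run):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                 # the Service created in BeforeEach above
  podManagementPolicy: Parallel     # "burst" semantics: no readiness ordering
  replicas: 1
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4            # docroot /usr/local/apache2/htdocs, per the mv commands above
        readinessProbe:
          httpGet:
            path: /index.html       # assumption: the probe fails once index.html is moved away
            port: 80
          periodSeconds: 1
EOF
# With ss-0 deliberately unready, a burst scale-up still creates ss-1 and ss-2:
kubectl scale statefulset ss --replicas=3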
STEP: Scale down will not halt with unhealthy stateful pod
Mar 24 00:14:50.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 24 00:14:51.169: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" [kubectl exec stream setup/teardown frames elided]
Mar 24 00:14:51.169: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 24 00:14:51.169: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 24 00:14:51.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 24 00:14:51.386: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" [kubectl exec stream setup/teardown frames elided]
Mar 24 00:14:51.386: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 24 00:14:51.386: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 24 00:14:51.386: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 24 00:14:51.663: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" [kubectl exec stream setup/teardown frames elided]
Mar 24 00:14:51.663: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 24 00:14:51.663: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
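At this point every replica is failing its readiness probe. The scale-down that follows is the actual assertion of this spec: burst (Parallel) pod management must delete pods without waiting for them to come back Ready. Driven by hand against the sketch above it would be:

# Scale to zero while all pods are unready; with podManagementPolicy: Parallel
# the controller removes every replica at once instead of blocking on readiness.
kubectl scale statefulset ss --replicas=0
kubectl get pods -l app=ss -w    # all three pods terminate together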
Mar 24 00:14:51.663: INFO: Waiting for statefulset status.replicas updated to 0
Mar 24 00:14:51.666: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Mar 24 00:15:01.674: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 24 00:15:01.674: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 24 00:15:01.674: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 24 00:15:01.688: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 24 00:15:01.688: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:09 +0000 UTC }]
Mar 24 00:15:01.688: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:30 +0000 UTC }]
Mar 24 00:15:01.689: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-24 00:14:30 +0000 UTC }]
Mar 24 00:15:01.689: INFO:
Mar 24 00:15:01.689: INFO: StatefulSet ss has not reached scale 0, at 3
[nine further passes of this three-pod status table, printed roughly once a second from 00:15:02.795 through 00:15:10.859, elided; over that window ss-2 and then ss-0 moved from Running to Pending with a 30s deletion grace period while ss-1 stayed Running, and each pass ended "StatefulSet ss has not reached scale 0, at 3"]
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4047
Mar 24 00:15:11.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 24 00:15:11.997: INFO: rc: 1
Mar 24 00:15:11.997: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-4047 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("webserver")
error: exit status 1
[the same RunHostCmd was retried every 10 seconds, 30 further attempts from 00:15:21.997 through a final attempt at 00:20:14.930, each returning rc: 1 with stderr "Error from server (NotFound): pods "ss-0" not found"]
Mar 24 00:20:15.022: INFO: rc: 1
Mar 24 00:20:15.022: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0:
Mar 24 00:20:15.022: INFO: Scaling statefulset ss to 0
Mar 24 00:20:15.030: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 24 00:20:15.033: INFO: Deleting all statefulset in ns statefulset-4047
Mar 24 00:20:15.035: INFO: Scaling statefulset ss to 0
Mar 24 00:20:15.047: INFO: Waiting for statefulset status.replicas updated to 0
Mar 24 00:20:15.049: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 24 00:20:15.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4047" for this suite.
• [SLOW TEST:365.381 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":152,"skipped":2289,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
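The next spec walks a Service through a ResourceQuota's lifecycle: once the quota's status is calculated, creating a Service must show up in status.used, and deleting it must release the usage again. A minimal hand-run version, with illustrative names and limits:

kubectl create quota test-quota --hard=services=2
kubectl create service clusterip test-svc --tcp=80:80
kubectl describe resourcequota test-quota   # Used: services 1
kubectl delete service test-svc
kubectl describe resourcequota test-quota   # Used: services 0 once the quota controller resyncs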
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:20:15.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:20:26.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3853" for this suite. • [SLOW TEST:11.280 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":153,"skipped":2330,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:20:26.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:20:26.411: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 24 00:20:28.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2862 create -f -' Mar 24 00:20:31.760: INFO: stderr: "" Mar 24 00:20:31.760: INFO: stdout: "e2e-test-crd-publish-openapi-9000-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 24 00:20:31.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2862 delete e2e-test-crd-publish-openapi-9000-crds test-foo' Mar 24 00:20:31.884: INFO: stderr: "" Mar 24 00:20:31.884: INFO: stdout: 
"e2e-test-crd-publish-openapi-9000-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 24 00:20:31.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2862 apply -f -' Mar 24 00:20:32.174: INFO: stderr: "" Mar 24 00:20:32.174: INFO: stdout: "e2e-test-crd-publish-openapi-9000-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 24 00:20:32.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2862 delete e2e-test-crd-publish-openapi-9000-crds test-foo' Mar 24 00:20:32.279: INFO: stderr: "" Mar 24 00:20:32.279: INFO: stdout: "e2e-test-crd-publish-openapi-9000-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 24 00:20:32.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2862 create -f -' Mar 24 00:20:32.551: INFO: rc: 1 Mar 24 00:20:32.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2862 apply -f -' Mar 24 00:20:32.791: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 24 00:20:32.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2862 create -f -' Mar 24 00:20:33.019: INFO: rc: 1 Mar 24 00:20:33.019: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2862 apply -f -' Mar 24 00:20:33.254: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 24 00:20:33.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9000-crds' Mar 24 00:20:33.480: INFO: stderr: "" Mar 24 00:20:33.480: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9000-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 24 00:20:33.481: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9000-crds.metadata' Mar 24 00:20:33.709: INFO: stderr: "" Mar 24 00:20:33.709: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9000-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 24 00:20:33.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9000-crds.spec' Mar 24 00:20:33.929: INFO: stderr: "" Mar 24 00:20:33.929: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9000-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 24 00:20:33.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9000-crds.spec.bars' Mar 24 00:20:34.189: INFO: stderr: "" Mar 24 00:20:34.189: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9000-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 24 00:20:34.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9000-crds.spec.bars2' Mar 24 00:20:34.413: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:20:37.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2862" for this suite. 
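For reference, the validation schema that drives the explain output above can be written against the apiextensions/v1 Go types. This is a minimal sketch, assuming the k8s.io/apiextensions-apiserver module; the group and kind (foos.example.com, Foo) are illustrative stand-ins for the randomized e2e-test-crd-publish-openapi-9000 names in this run:

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Mirrors the fields shown by kubectl explain above: spec.bars is a list
	// of objects with a required "name", an optional "age", and a "bazs"
	// string list.
	bar := apiextensionsv1.JSONSchemaProps{
		Type:     "object",
		Required: []string{"name"},
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"name": {Type: "string", Description: "Name of Bar."},
			"age":  {Type: "string", Description: "Age of Bar."},
			"bazs": {Type: "array", Description: "List of Bazs.",
				Items: &apiextensionsv1.JSONSchemaPropsOrArray{
					Schema: &apiextensionsv1.JSONSchemaProps{Type: "string"},
				}},
		},
	}
	crd := apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {Type: "object", Description: "Specification of Foo",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									"bars": {Type: "array", Description: "List of Bars and their specs.",
										Items: &apiextensionsv1.JSONSchemaPropsOrArray{Schema: &bar}},
								}},
							"status": {Type: "object", Description: "Status of Foo"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}

Publishing a structural schema like this is what lets kubectl validate unknown and missing properties client-side and serve the explain text seen here.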
• [SLOW TEST:10.987 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":154,"skipped":2378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:20:37.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-4130/secret-test-a3568e7b-bee3-4291-8642-1781fd6a55f1 STEP: Creating a pod to test consume secrets Mar 24 00:20:37.422: INFO: Waiting up to 5m0s for pod "pod-configmaps-703ffdeb-a0a2-4481-94ea-bb1057d0e26a" in namespace "secrets-4130" to be "Succeeded or Failed" Mar 24 00:20:37.446: INFO: Pod "pod-configmaps-703ffdeb-a0a2-4481-94ea-bb1057d0e26a": Phase="Pending", Reason="", readiness=false. Elapsed: 23.524912ms Mar 24 00:20:39.450: INFO: Pod "pod-configmaps-703ffdeb-a0a2-4481-94ea-bb1057d0e26a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02749389s Mar 24 00:20:41.455: INFO: Pod "pod-configmaps-703ffdeb-a0a2-4481-94ea-bb1057d0e26a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032454648s STEP: Saw pod success Mar 24 00:20:41.455: INFO: Pod "pod-configmaps-703ffdeb-a0a2-4481-94ea-bb1057d0e26a" satisfied condition "Succeeded or Failed" Mar 24 00:20:41.458: INFO: Trying to get logs from node latest-worker pod pod-configmaps-703ffdeb-a0a2-4481-94ea-bb1057d0e26a container env-test: STEP: delete the pod Mar 24 00:20:41.505: INFO: Waiting for pod pod-configmaps-703ffdeb-a0a2-4481-94ea-bb1057d0e26a to disappear Mar 24 00:20:41.515: INFO: Pod pod-configmaps-703ffdeb-a0a2-4481-94ea-bb1057d0e26a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:20:41.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4130" for this suite. 
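The consume-secrets pod above boils down to an env var sourced from a secret key. A minimal sketch with the core/v1 Go types; the secret name, key, and image are illustrative, since the exact fixture layout is not printed in this log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-test-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					// SECRET_DATA is populated from one key of the secret.
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-example"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The test then asserts the pod reaches Succeeded and that the container's log shows the expected variable.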
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2445,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:20:41.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Mar 24 00:20:41.608: INFO: Waiting up to 5m0s for pod "var-expansion-455b0d02-f250-4f0e-a00c-521a5e6fc04f" in namespace "var-expansion-3680" to be "Succeeded or Failed" Mar 24 00:20:41.635: INFO: Pod "var-expansion-455b0d02-f250-4f0e-a00c-521a5e6fc04f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.572976ms Mar 24 00:20:43.640: INFO: Pod "var-expansion-455b0d02-f250-4f0e-a00c-521a5e6fc04f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031641556s Mar 24 00:20:45.644: INFO: Pod "var-expansion-455b0d02-f250-4f0e-a00c-521a5e6fc04f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035679171s STEP: Saw pod success Mar 24 00:20:45.644: INFO: Pod "var-expansion-455b0d02-f250-4f0e-a00c-521a5e6fc04f" satisfied condition "Succeeded or Failed" Mar 24 00:20:45.647: INFO: Trying to get logs from node latest-worker2 pod var-expansion-455b0d02-f250-4f0e-a00c-521a5e6fc04f container dapi-container: STEP: delete the pod Mar 24 00:20:45.685: INFO: Waiting for pod var-expansion-455b0d02-f250-4f0e-a00c-521a5e6fc04f to disappear Mar 24 00:20:45.689: INFO: Pod var-expansion-455b0d02-f250-4f0e-a00c-521a5e6fc04f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:20:45.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3680" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2448,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:20:45.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-abf7d826-f6be-42a5-b1e9-342bf4656583 STEP: Creating a pod to test consume configMaps Mar 24 00:20:45.769: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8bc9851-cce2-495f-a392-5db33a7e4347" in namespace "projected-8082" to be "Succeeded or Failed" Mar 24 00:20:45.785: INFO: Pod "pod-projected-configmaps-b8bc9851-cce2-495f-a392-5db33a7e4347": Phase="Pending", Reason="", readiness=false. Elapsed: 15.929443ms Mar 24 00:20:47.790: INFO: Pod "pod-projected-configmaps-b8bc9851-cce2-495f-a392-5db33a7e4347": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020322912s Mar 24 00:20:49.794: INFO: Pod "pod-projected-configmaps-b8bc9851-cce2-495f-a392-5db33a7e4347": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024757747s STEP: Saw pod success Mar 24 00:20:49.794: INFO: Pod "pod-projected-configmaps-b8bc9851-cce2-495f-a392-5db33a7e4347" satisfied condition "Succeeded or Failed" Mar 24 00:20:49.797: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b8bc9851-cce2-495f-a392-5db33a7e4347 container projected-configmap-volume-test: STEP: delete the pod Mar 24 00:20:49.817: INFO: Waiting for pod pod-projected-configmaps-b8bc9851-cce2-495f-a392-5db33a7e4347 to disappear Mar 24 00:20:49.820: INFO: Pod pod-projected-configmaps-b8bc9851-cce2-495f-a392-5db33a7e4347 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:20:49.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8082" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2450,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:20:49.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:20:50.414: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:20:52.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606050, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606050, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606050, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606050, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:20:55.525: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 24 00:20:59.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-5463 to-be-attached-pod -i -c=container1' Mar 24 00:20:59.677: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:20:59.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5463" for this suite. STEP: Destroying namespace "webhook-5463-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.930 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":158,"skipped":2459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:20:59.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6613 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6613 STEP: creating replication controller externalsvc in namespace services-6613 I0324 00:20:59.918119 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6613, replica count: 2 I0324 00:21:02.968548 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0324 00:21:05.968805 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 24 00:21:06.028: INFO: Creating new exec pod Mar 24 00:21:10.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-6613 execpodvghfv -- /bin/sh -x -c nslookup nodeport-service' Mar 24 00:21:10.260: INFO: stderr: "I0324 00:21:10.161900 1774 log.go:172] (0xc000a5f340) (0xc0009988c0) Create stream\nI0324 00:21:10.161970 1774 log.go:172] (0xc000a5f340) (0xc0009988c0) Stream added, broadcasting: 1\nI0324 00:21:10.166755 1774 log.go:172] (0xc000a5f340) Reply frame received for 1\nI0324 00:21:10.166813 1774 log.go:172] (0xc000a5f340) (0xc00062b720) Create stream\nI0324 00:21:10.166828 1774 log.go:172] (0xc000a5f340) (0xc00062b720) Stream added, broadcasting: 3\nI0324 00:21:10.167864 1774 log.go:172] (0xc000a5f340) Reply frame received for 3\nI0324 00:21:10.167893 1774 log.go:172] (0xc000a5f340) (0xc000452b40) Create stream\nI0324 00:21:10.167903 1774 log.go:172] (0xc000a5f340) (0xc000452b40) Stream added, 
broadcasting: 5\nI0324 00:21:10.168752 1774 log.go:172] (0xc000a5f340) Reply frame received for 5\nI0324 00:21:10.240892 1774 log.go:172] (0xc000a5f340) Data frame received for 5\nI0324 00:21:10.240923 1774 log.go:172] (0xc000452b40) (5) Data frame handling\nI0324 00:21:10.240946 1774 log.go:172] (0xc000452b40) (5) Data frame sent\n+ nslookup nodeport-service\nI0324 00:21:10.251267 1774 log.go:172] (0xc000a5f340) Data frame received for 3\nI0324 00:21:10.251305 1774 log.go:172] (0xc00062b720) (3) Data frame handling\nI0324 00:21:10.251334 1774 log.go:172] (0xc00062b720) (3) Data frame sent\nI0324 00:21:10.252668 1774 log.go:172] (0xc000a5f340) Data frame received for 3\nI0324 00:21:10.252691 1774 log.go:172] (0xc00062b720) (3) Data frame handling\nI0324 00:21:10.252712 1774 log.go:172] (0xc00062b720) (3) Data frame sent\nI0324 00:21:10.253330 1774 log.go:172] (0xc000a5f340) Data frame received for 3\nI0324 00:21:10.253361 1774 log.go:172] (0xc00062b720) (3) Data frame handling\nI0324 00:21:10.253568 1774 log.go:172] (0xc000a5f340) Data frame received for 5\nI0324 00:21:10.253594 1774 log.go:172] (0xc000452b40) (5) Data frame handling\nI0324 00:21:10.255533 1774 log.go:172] (0xc000a5f340) Data frame received for 1\nI0324 00:21:10.255579 1774 log.go:172] (0xc0009988c0) (1) Data frame handling\nI0324 00:21:10.255614 1774 log.go:172] (0xc0009988c0) (1) Data frame sent\nI0324 00:21:10.255636 1774 log.go:172] (0xc000a5f340) (0xc0009988c0) Stream removed, broadcasting: 1\nI0324 00:21:10.255657 1774 log.go:172] (0xc000a5f340) Go away received\nI0324 00:21:10.256279 1774 log.go:172] (0xc000a5f340) (0xc0009988c0) Stream removed, broadcasting: 1\nI0324 00:21:10.256313 1774 log.go:172] (0xc000a5f340) (0xc00062b720) Stream removed, broadcasting: 3\nI0324 00:21:10.256332 1774 log.go:172] (0xc000a5f340) (0xc000452b40) Stream removed, broadcasting: 5\n" Mar 24 00:21:10.260: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6613.svc.cluster.local\tcanonical name = externalsvc.services-6613.svc.cluster.local.\nName:\texternalsvc.services-6613.svc.cluster.local\nAddress: 10.96.24.151\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6613, will wait for the garbage collector to delete the pods Mar 24 00:21:10.320: INFO: Deleting ReplicationController externalsvc took: 6.283588ms Mar 24 00:21:10.420: INFO: Terminating ReplicationController externalsvc pods took: 100.216822ms Mar 24 00:21:23.087: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:21:23.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6613" for this suite. 
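The type flip itself is a small Service update: the selector-backed NodePort spec becomes a bare ExternalName record, which the cluster DNS then serves as the CNAME seen in the nslookup output above. A minimal sketch of the target state with the core/v1 Go types; note that an ExternalName service needs no ports, selector, or clusterIP:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-service", Namespace: "services-6613"},
		Spec: corev1.ServiceSpec{
			// Switching to ExternalName: selector, clusterIP and nodePorts are
			// cleared, and DNS answers with a CNAME to the named host.
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "externalsvc.services-6613.svc.cluster.local",
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}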
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:23.353 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":159,"skipped":2501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:21:23.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-d270b4af-9a3f-4b21-ab6d-9be0255b15db STEP: Creating a pod to test consume secrets Mar 24 00:21:23.230: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1965b495-1803-4f9e-aeb9-b1f90727495d" in namespace "projected-8076" to be "Succeeded or Failed" Mar 24 00:21:23.235: INFO: Pod "pod-projected-secrets-1965b495-1803-4f9e-aeb9-b1f90727495d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325737ms Mar 24 00:21:25.239: INFO: Pod "pod-projected-secrets-1965b495-1803-4f9e-aeb9-b1f90727495d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008261103s Mar 24 00:21:27.243: INFO: Pod "pod-projected-secrets-1965b495-1803-4f9e-aeb9-b1f90727495d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01247261s STEP: Saw pod success Mar 24 00:21:27.243: INFO: Pod "pod-projected-secrets-1965b495-1803-4f9e-aeb9-b1f90727495d" satisfied condition "Succeeded or Failed" Mar 24 00:21:27.246: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-1965b495-1803-4f9e-aeb9-b1f90727495d container projected-secret-volume-test: STEP: delete the pod Mar 24 00:21:27.281: INFO: Waiting for pod pod-projected-secrets-1965b495-1803-4f9e-aeb9-b1f90727495d to disappear Mar 24 00:21:27.295: INFO: Pod pod-projected-secrets-1965b495-1803-4f9e-aeb9-b1f90727495d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:21:27.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8076" for this suite. 
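The Item Mode part of this spec sets a per-key file mode inside a projected secret volume, overriding the volume's defaultMode for that one path. A minimal sketch with the core/v1 Go types; names, paths, and the 0400 mode are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	itemMode := int32(0400) // per-item mode overrides the volume's defaultMode
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
								Items: []corev1.KeyToPath{{
									Key: "data-1", Path: "new-path-data-1", Mode: &itemMode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "busybox",
				// Print the mapped file and its mode so both can be asserted.
				Command: []string{"sh", "-c",
					"cat /etc/projected-secret-volume/new-path-data-1 && stat -c %a /etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}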
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2578,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:21:27.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7933 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 24 00:21:27.405: INFO: Found 0 stateful pods, waiting for 3 Mar 24 00:21:37.410: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 24 00:21:37.410: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 24 00:21:37.410: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 24 00:21:37.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7933 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 24 00:21:37.661: INFO: stderr: "I0324 00:21:37.560357 1793 log.go:172] (0xc000aea000) (0xc00082f2c0) Create stream\nI0324 00:21:37.560446 1793 log.go:172] (0xc000aea000) (0xc00082f2c0) Stream added, broadcasting: 1\nI0324 00:21:37.563537 1793 log.go:172] (0xc000aea000) Reply frame received for 1\nI0324 00:21:37.563578 1793 log.go:172] (0xc000aea000) (0xc00082f540) Create stream\nI0324 00:21:37.563586 1793 log.go:172] (0xc000aea000) (0xc00082f540) Stream added, broadcasting: 3\nI0324 00:21:37.564529 1793 log.go:172] (0xc000aea000) Reply frame received for 3\nI0324 00:21:37.564565 1793 log.go:172] (0xc000aea000) (0xc00082f5e0) Create stream\nI0324 00:21:37.564578 1793 log.go:172] (0xc000aea000) (0xc00082f5e0) Stream added, broadcasting: 5\nI0324 00:21:37.565521 1793 log.go:172] (0xc000aea000) Reply frame received for 5\nI0324 00:21:37.618051 1793 log.go:172] (0xc000aea000) Data frame received for 5\nI0324 00:21:37.618086 1793 log.go:172] (0xc00082f5e0) (5) Data frame handling\nI0324 00:21:37.618098 1793 log.go:172] (0xc00082f5e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0324 00:21:37.653663 1793 log.go:172] (0xc000aea000) Data frame received for 3\nI0324 00:21:37.653709 1793 log.go:172] (0xc00082f540) (3) Data frame handling\nI0324 00:21:37.653750 1793 log.go:172] (0xc00082f540) (3) Data frame sent\nI0324 00:21:37.653983 1793 log.go:172] (0xc000aea000) Data frame received for 
3\nI0324 00:21:37.654061 1793 log.go:172] (0xc00082f540) (3) Data frame handling\nI0324 00:21:37.654084 1793 log.go:172] (0xc000aea000) Data frame received for 5\nI0324 00:21:37.654095 1793 log.go:172] (0xc00082f5e0) (5) Data frame handling\nI0324 00:21:37.656111 1793 log.go:172] (0xc000aea000) Data frame received for 1\nI0324 00:21:37.656137 1793 log.go:172] (0xc00082f2c0) (1) Data frame handling\nI0324 00:21:37.656160 1793 log.go:172] (0xc00082f2c0) (1) Data frame sent\nI0324 00:21:37.656181 1793 log.go:172] (0xc000aea000) (0xc00082f2c0) Stream removed, broadcasting: 1\nI0324 00:21:37.656229 1793 log.go:172] (0xc000aea000) Go away received\nI0324 00:21:37.656680 1793 log.go:172] (0xc000aea000) (0xc00082f2c0) Stream removed, broadcasting: 1\nI0324 00:21:37.656714 1793 log.go:172] (0xc000aea000) (0xc00082f540) Stream removed, broadcasting: 3\nI0324 00:21:37.656738 1793 log.go:172] (0xc000aea000) (0xc00082f5e0) Stream removed, broadcasting: 5\n" Mar 24 00:21:37.661: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 24 00:21:37.661: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 24 00:21:47.693: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 24 00:21:57.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7933 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 24 00:21:57.915: INFO: stderr: "I0324 00:21:57.846674 1814 log.go:172] (0xc000afc000) (0xc00065d360) Create stream\nI0324 00:21:57.846733 1814 log.go:172] (0xc000afc000) (0xc00065d360) Stream added, broadcasting: 1\nI0324 00:21:57.848941 1814 log.go:172] (0xc000afc000) Reply frame received for 1\nI0324 00:21:57.848974 1814 log.go:172] (0xc000afc000) (0xc000434be0) Create stream\nI0324 00:21:57.848984 1814 log.go:172] (0xc000afc000) (0xc000434be0) Stream added, broadcasting: 3\nI0324 00:21:57.850031 1814 log.go:172] (0xc000afc000) Reply frame received for 3\nI0324 00:21:57.850067 1814 log.go:172] (0xc000afc000) (0xc000ade000) Create stream\nI0324 00:21:57.850076 1814 log.go:172] (0xc000afc000) (0xc000ade000) Stream added, broadcasting: 5\nI0324 00:21:57.850779 1814 log.go:172] (0xc000afc000) Reply frame received for 5\nI0324 00:21:57.908771 1814 log.go:172] (0xc000afc000) Data frame received for 3\nI0324 00:21:57.908815 1814 log.go:172] (0xc000434be0) (3) Data frame handling\nI0324 00:21:57.908839 1814 log.go:172] (0xc000434be0) (3) Data frame sent\nI0324 00:21:57.908853 1814 log.go:172] (0xc000afc000) Data frame received for 3\nI0324 00:21:57.908869 1814 log.go:172] (0xc000434be0) (3) Data frame handling\nI0324 00:21:57.909001 1814 log.go:172] (0xc000afc000) Data frame received for 5\nI0324 00:21:57.909020 1814 log.go:172] (0xc000ade000) (5) Data frame handling\nI0324 00:21:57.909033 1814 log.go:172] (0xc000ade000) (5) Data frame sent\nI0324 00:21:57.909042 1814 log.go:172] (0xc000afc000) Data frame received for 5\nI0324 00:21:57.909052 1814 log.go:172] (0xc000ade000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0324 00:21:57.910853 1814 log.go:172] (0xc000afc000) Data frame received for 1\nI0324 00:21:57.910890 1814 log.go:172] (0xc00065d360) (1) Data frame 
handling\nI0324 00:21:57.910915 1814 log.go:172] (0xc00065d360) (1) Data frame sent\nI0324 00:21:57.910943 1814 log.go:172] (0xc000afc000) (0xc00065d360) Stream removed, broadcasting: 1\nI0324 00:21:57.910977 1814 log.go:172] (0xc000afc000) Go away received\nI0324 00:21:57.911265 1814 log.go:172] (0xc000afc000) (0xc00065d360) Stream removed, broadcasting: 1\nI0324 00:21:57.911289 1814 log.go:172] (0xc000afc000) (0xc000434be0) Stream removed, broadcasting: 3\nI0324 00:21:57.911299 1814 log.go:172] (0xc000afc000) (0xc000ade000) Stream removed, broadcasting: 5\n" Mar 24 00:21:57.916: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 24 00:21:57.916: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 24 00:22:17.935: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update Mar 24 00:22:17.935: INFO: Waiting for Pod statefulset-7933/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 24 00:22:27.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7933 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 24 00:22:28.200: INFO: stderr: "I0324 00:22:28.090763 1835 log.go:172] (0xc000a42000) (0xc00072b2c0) Create stream\nI0324 00:22:28.090841 1835 log.go:172] (0xc000a42000) (0xc00072b2c0) Stream added, broadcasting: 1\nI0324 00:22:28.094214 1835 log.go:172] (0xc000a42000) Reply frame received for 1\nI0324 00:22:28.094267 1835 log.go:172] (0xc000a42000) (0xc00072b4a0) Create stream\nI0324 00:22:28.094284 1835 log.go:172] (0xc000a42000) (0xc00072b4a0) Stream added, broadcasting: 3\nI0324 00:22:28.095352 1835 log.go:172] (0xc000a42000) Reply frame received for 3\nI0324 00:22:28.095411 1835 log.go:172] (0xc000a42000) (0xc00072b540) Create stream\nI0324 00:22:28.095430 1835 log.go:172] (0xc000a42000) (0xc00072b540) Stream added, broadcasting: 5\nI0324 00:22:28.096376 1835 log.go:172] (0xc000a42000) Reply frame received for 5\nI0324 00:22:28.160328 1835 log.go:172] (0xc000a42000) Data frame received for 5\nI0324 00:22:28.160355 1835 log.go:172] (0xc00072b540) (5) Data frame handling\nI0324 00:22:28.160373 1835 log.go:172] (0xc00072b540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0324 00:22:28.192870 1835 log.go:172] (0xc000a42000) Data frame received for 5\nI0324 00:22:28.192903 1835 log.go:172] (0xc00072b540) (5) Data frame handling\nI0324 00:22:28.192984 1835 log.go:172] (0xc000a42000) Data frame received for 3\nI0324 00:22:28.193031 1835 log.go:172] (0xc00072b4a0) (3) Data frame handling\nI0324 00:22:28.193050 1835 log.go:172] (0xc00072b4a0) (3) Data frame sent\nI0324 00:22:28.193063 1835 log.go:172] (0xc000a42000) Data frame received for 3\nI0324 00:22:28.193074 1835 log.go:172] (0xc00072b4a0) (3) Data frame handling\nI0324 00:22:28.195321 1835 log.go:172] (0xc000a42000) Data frame received for 1\nI0324 00:22:28.195352 1835 log.go:172] (0xc00072b2c0) (1) Data frame handling\nI0324 00:22:28.195372 1835 log.go:172] (0xc00072b2c0) (1) Data frame sent\nI0324 00:22:28.195388 1835 log.go:172] (0xc000a42000) (0xc00072b2c0) Stream removed, broadcasting: 1\nI0324 00:22:28.195472 1835 log.go:172] (0xc000a42000) Go away received\nI0324 00:22:28.195797 1835 log.go:172] (0xc000a42000) (0xc00072b2c0) Stream removed, broadcasting: 1\nI0324 00:22:28.195817 1835 
log.go:172] (0xc000a42000) (0xc00072b4a0) Stream removed, broadcasting: 3\nI0324 00:22:28.195830 1835 log.go:172] (0xc000a42000) (0xc00072b540) Stream removed, broadcasting: 5\n" Mar 24 00:22:28.200: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 24 00:22:28.200: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 24 00:22:38.244: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 24 00:22:48.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7933 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 24 00:22:48.491: INFO: stderr: "I0324 00:22:48.413533 1856 log.go:172] (0xc0009df600) (0xc0009a8820) Create stream\nI0324 00:22:48.413597 1856 log.go:172] (0xc0009df600) (0xc0009a8820) Stream added, broadcasting: 1\nI0324 00:22:48.416054 1856 log.go:172] (0xc0009df600) Reply frame received for 1\nI0324 00:22:48.416097 1856 log.go:172] (0xc0009df600) (0xc0009720a0) Create stream\nI0324 00:22:48.416110 1856 log.go:172] (0xc0009df600) (0xc0009720a0) Stream added, broadcasting: 3\nI0324 00:22:48.417231 1856 log.go:172] (0xc0009df600) Reply frame received for 3\nI0324 00:22:48.417272 1856 log.go:172] (0xc0009df600) (0xc000a4e0a0) Create stream\nI0324 00:22:48.417284 1856 log.go:172] (0xc0009df600) (0xc000a4e0a0) Stream added, broadcasting: 5\nI0324 00:22:48.418487 1856 log.go:172] (0xc0009df600) Reply frame received for 5\nI0324 00:22:48.484933 1856 log.go:172] (0xc0009df600) Data frame received for 5\nI0324 00:22:48.485068 1856 log.go:172] (0xc000a4e0a0) (5) Data frame handling\nI0324 00:22:48.485094 1856 log.go:172] (0xc000a4e0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0324 00:22:48.485274 1856 log.go:172] (0xc0009df600) Data frame received for 5\nI0324 00:22:48.485301 1856 log.go:172] (0xc0009df600) Data frame received for 3\nI0324 00:22:48.485339 1856 log.go:172] (0xc0009720a0) (3) Data frame handling\nI0324 00:22:48.485363 1856 log.go:172] (0xc0009720a0) (3) Data frame sent\nI0324 00:22:48.485387 1856 log.go:172] (0xc0009df600) Data frame received for 3\nI0324 00:22:48.485404 1856 log.go:172] (0xc0009720a0) (3) Data frame handling\nI0324 00:22:48.485431 1856 log.go:172] (0xc000a4e0a0) (5) Data frame handling\nI0324 00:22:48.487187 1856 log.go:172] (0xc0009df600) Data frame received for 1\nI0324 00:22:48.487216 1856 log.go:172] (0xc0009a8820) (1) Data frame handling\nI0324 00:22:48.487235 1856 log.go:172] (0xc0009a8820) (1) Data frame sent\nI0324 00:22:48.487250 1856 log.go:172] (0xc0009df600) (0xc0009a8820) Stream removed, broadcasting: 1\nI0324 00:22:48.487288 1856 log.go:172] (0xc0009df600) Go away received\nI0324 00:22:48.487597 1856 log.go:172] (0xc0009df600) (0xc0009a8820) Stream removed, broadcasting: 1\nI0324 00:22:48.487616 1856 log.go:172] (0xc0009df600) (0xc0009720a0) Stream removed, broadcasting: 3\nI0324 00:22:48.487627 1856 log.go:172] (0xc0009df600) (0xc000a4e0a0) Stream removed, broadcasting: 5\n" Mar 24 00:22:48.491: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 24 00:22:48.491: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 24 00:23:08.513: INFO: Deleting all statefulset in ns statefulset-7933 Mar 24 00:23:08.515: INFO: Scaling statefulset ss2 to 0 Mar 24 00:23:18.542: INFO: Waiting for statefulset status.replicas updated to 0 Mar 24 00:23:18.545: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:23:18.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7933" for this suite. • [SLOW TEST:111.261 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":161,"skipped":2594,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:23:18.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 24 00:23:25.098: INFO: 0 pods remaining Mar 24 00:23:25.098: INFO: 0 pods has nil DeletionTimestamp Mar 24 00:23:25.098: INFO: STEP: Gathering metrics W0324 00:23:26.481834 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 24 00:23:26.481: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:23:26.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8504" for this suite. • [SLOW TEST:8.239 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":162,"skipped":2701,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:23:26.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 24 00:23:27.392: INFO: Waiting up to 5m0s for pod "downwardapi-volume-021236a8-6357-4f53-a840-f4cd2ae64f3f" in namespace "downward-api-6206" to be "Succeeded or Failed" Mar 24 00:23:27.483: INFO: Pod "downwardapi-volume-021236a8-6357-4f53-a840-f4cd2ae64f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 91.297874ms Mar 24 00:23:29.488: INFO: Pod "downwardapi-volume-021236a8-6357-4f53-a840-f4cd2ae64f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095380215s Mar 24 00:23:31.492: INFO: Pod "downwardapi-volume-021236a8-6357-4f53-a840-f4cd2ae64f3f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.099882013s STEP: Saw pod success Mar 24 00:23:31.492: INFO: Pod "downwardapi-volume-021236a8-6357-4f53-a840-f4cd2ae64f3f" satisfied condition "Succeeded or Failed" Mar 24 00:23:31.495: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-021236a8-6357-4f53-a840-f4cd2ae64f3f container client-container: STEP: delete the pod Mar 24 00:23:31.569: INFO: Waiting for pod downwardapi-volume-021236a8-6357-4f53-a840-f4cd2ae64f3f to disappear Mar 24 00:23:31.573: INFO: Pod downwardapi-volume-021236a8-6357-4f53-a840-f4cd2ae64f3f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:23:31.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6206" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:23:31.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Mar 24 00:23:31.647: INFO: Waiting up to 5m0s for pod "client-containers-aa8d411d-7f37-488b-90f7-b9ba5a18a52c" in namespace "containers-2833" to be "Succeeded or Failed" Mar 24 00:23:31.651: INFO: Pod "client-containers-aa8d411d-7f37-488b-90f7-b9ba5a18a52c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.235292ms Mar 24 00:23:33.654: INFO: Pod "client-containers-aa8d411d-7f37-488b-90f7-b9ba5a18a52c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006992789s Mar 24 00:23:35.659: INFO: Pod "client-containers-aa8d411d-7f37-488b-90f7-b9ba5a18a52c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011183604s STEP: Saw pod success Mar 24 00:23:35.659: INFO: Pod "client-containers-aa8d411d-7f37-488b-90f7-b9ba5a18a52c" satisfied condition "Succeeded or Failed" Mar 24 00:23:35.662: INFO: Trying to get logs from node latest-worker pod client-containers-aa8d411d-7f37-488b-90f7-b9ba5a18a52c container test-container: STEP: delete the pod Mar 24 00:23:35.685: INFO: Waiting for pod client-containers-aa8d411d-7f37-488b-90f7-b9ba5a18a52c to disappear Mar 24 00:23:35.699: INFO: Pod client-containers-aa8d411d-7f37-488b-90f7-b9ba5a18a52c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:23:35.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2833" for this suite. 
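
The "docker cmd" test above overrides the image's default arguments by setting the container's args field, which replaces the image's CMD while leaving any ENTRYPOINT intact. A minimal sketch of an equivalent pod follows; the pod name, image, and argument values are illustrative assumptions, not the manifest the suite generates.

# Hypothetical pod that overrides the image's default arguments (Docker CMD)
# via the container "args" field. busybox defines no ENTRYPOINT, so the args
# below become the command that runs.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    args: ["echo", "overridden arguments"]
EOF
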
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2735,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:23:35.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-551174c4-844a-4d57-bc42-1572a25b9b72 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-551174c4-844a-4d57-bc42-1572a25b9b72 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:23:41.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3308" for this suite. • [SLOW TEST:6.203 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2742,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:23:41.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 24 00:23:46.558: INFO: Successfully updated pod "labelsupdatee8b278d7-334a-41ae-8977-5c4500d1c18c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:23:48.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3330" for this suite. 
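
The projected downwardAPI test above mounts the pod's own labels as a file and then modifies them; the kubelet rewrites the projected file, which is what the "Successfully updated pod" line reflects. A rough sketch under assumed names (pod name, mount path, and label key are illustrative):

# Pod exposing its labels through a projected downwardAPI volume; the kubelet
# refreshes /etc/podinfo/labels when the labels change.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    stage: v1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# Changing the label should show up in the mounted file shortly afterwards:
kubectl label pod labels-demo stage=v2 --overwrite
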
• [SLOW TEST:6.683 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2757,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:23:48.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:23:48.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-687" for this suite. STEP: Destroying namespace "nspatchtest-289f91c9-5a5d-4fed-980d-a86156c2e773-2079" for this suite. 
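
The Namespace patch test boils down to a strategic merge patch that adds a label, followed by a read-back to confirm it. An equivalent kubectl sketch (namespace and label names are illustrative):

kubectl create namespace nspatch-demo
kubectl patch namespace nspatch-demo -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
kubectl get namespace nspatch-demo --show-labels   # the label should be present
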
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":167,"skipped":2773,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:23:48.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 24 00:23:49.430: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 24 00:23:51.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606229, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606229, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606229, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606229, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:23:54.631: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:23:54.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:23:55.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1733" for this suite. 
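
Conversion between the v1 and v2 custom resources in the test above is delegated to the deployed webhook through the CRD's spec.conversion stanza: when a list contains objects of both versions, the API server sends the mismatched ones to the webhook for conversion. A sketch of the relevant CRD fragment, with group, names, and service coordinates assumed for illustration (a real configuration also needs a caBundle, and the conversion service must actually exist):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: examples.webhook.example.com
spec:
  group: webhook.example.com
  scope: Namespaced
  names:
    plural: examples
    singular: example
    kind: Example
  versions:
  - name: v1
    served: true
    storage: true
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  - name: v2
    served: true
    storage: false
    schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        # caBundle omitted for brevity; the API server must trust the serving cert
        service:
          namespace: crd-webhook-1733
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert
EOF
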
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.246 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":168,"skipped":2777,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:23:56.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:24:12.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-295" for this suite. • [SLOW TEST:16.249 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":169,"skipped":2789,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:24:12.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:24:12.946: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:24:14.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606252, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606252, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606253, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606252, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:24:18.001: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:24:18.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5521" for this suite. STEP: Destroying namespace "webhook-5521-markers" for this suite. 
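
The update and patch steps above amount to rewriting the webhook's rules so that the CREATE operation drops out of, and later returns to, the intercepted set; while CREATE is absent, creating the non-compliant configMap succeeds. A kubectl sketch using a JSON patch (the configuration name and rule index are hypothetical):

# Stop intercepting CREATE (the test's "update" step):
kubectl patch validatingwebhookconfiguration e2e-test-webhook --type='json' \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
# Put CREATE back (the test's "patch" step):
kubectl patch validatingwebhookconfiguration e2e-test-webhook --type='json' \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]'
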
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.936 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":170,"skipped":2801,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:24:18.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 24 00:24:18.276: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c7203018-cab9-4be6-b08a-3ae795eed50f" in namespace "downward-api-2863" to be "Succeeded or Failed" Mar 24 00:24:18.295: INFO: Pod "downwardapi-volume-c7203018-cab9-4be6-b08a-3ae795eed50f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.267393ms Mar 24 00:24:20.299: INFO: Pod "downwardapi-volume-c7203018-cab9-4be6-b08a-3ae795eed50f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023050325s Mar 24 00:24:22.304: INFO: Pod "downwardapi-volume-c7203018-cab9-4be6-b08a-3ae795eed50f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027185252s STEP: Saw pod success Mar 24 00:24:22.304: INFO: Pod "downwardapi-volume-c7203018-cab9-4be6-b08a-3ae795eed50f" satisfied condition "Succeeded or Failed" Mar 24 00:24:22.307: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c7203018-cab9-4be6-b08a-3ae795eed50f container client-container: STEP: delete the pod Mar 24 00:24:22.339: INFO: Waiting for pod downwardapi-volume-c7203018-cab9-4be6-b08a-3ae795eed50f to disappear Mar 24 00:24:22.353: INFO: Pod downwardapi-volume-c7203018-cab9-4be6-b08a-3ae795eed50f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:24:22.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2863" for this suite. 
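
DefaultMode on a downwardAPI volume sets the permission bits of every projected file, and the test verifies the mode from the container's output before expecting "Succeeded or Failed". A minimal sketch, assuming illustrative names and mode 0400:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]   # should show -r--------
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
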
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":2811,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:24:22.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 24 00:24:22.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9242' Mar 24 00:24:22.726: INFO: stderr: "" Mar 24 00:24:22.726: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 24 00:24:22.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9242' Mar 24 00:24:22.855: INFO: stderr: "" Mar 24 00:24:22.855: INFO: stdout: "update-demo-nautilus-dzh9l update-demo-nautilus-mv49k " Mar 24 00:24:22.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dzh9l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9242' Mar 24 00:24:22.960: INFO: stderr: "" Mar 24 00:24:22.960: INFO: stdout: "" Mar 24 00:24:22.960: INFO: update-demo-nautilus-dzh9l is created but not running Mar 24 00:24:27.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9242' Mar 24 00:24:28.056: INFO: stderr: "" Mar 24 00:24:28.056: INFO: stdout: "update-demo-nautilus-dzh9l update-demo-nautilus-mv49k " Mar 24 00:24:28.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dzh9l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9242' Mar 24 00:24:28.146: INFO: stderr: "" Mar 24 00:24:28.146: INFO: stdout: "true" Mar 24 00:24:28.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dzh9l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9242' Mar 24 00:24:28.235: INFO: stderr: "" Mar 24 00:24:28.235: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 00:24:28.235: INFO: validating pod update-demo-nautilus-dzh9l Mar 24 00:24:28.239: INFO: got data: { "image": "nautilus.jpg" } Mar 24 00:24:28.240: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 00:24:28.240: INFO: update-demo-nautilus-dzh9l is verified up and running Mar 24 00:24:28.240: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mv49k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9242' Mar 24 00:24:28.336: INFO: stderr: "" Mar 24 00:24:28.336: INFO: stdout: "true" Mar 24 00:24:28.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mv49k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9242' Mar 24 00:24:28.427: INFO: stderr: "" Mar 24 00:24:28.427: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 00:24:28.427: INFO: validating pod update-demo-nautilus-mv49k Mar 24 00:24:28.431: INFO: got data: { "image": "nautilus.jpg" } Mar 24 00:24:28.431: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 00:24:28.431: INFO: update-demo-nautilus-mv49k is verified up and running STEP: using delete to clean up resources Mar 24 00:24:28.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9242' Mar 24 00:24:28.525: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 24 00:24:28.525: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 24 00:24:28.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9242' Mar 24 00:24:28.964: INFO: stderr: "No resources found in kubectl-9242 namespace.\n" Mar 24 00:24:28.964: INFO: stdout: "" Mar 24 00:24:28.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9242 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 24 00:24:29.103: INFO: stderr: "" Mar 24 00:24:29.103: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:24:29.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9242" for this suite. • [SLOW TEST:6.749 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":172,"skipped":2816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:24:29.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-qhqd STEP: Creating a pod to test atomic-volume-subpath Mar 24 00:24:29.213: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qhqd" in namespace "subpath-3024" to be "Succeeded or Failed" Mar 24 00:24:29.216: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.878996ms Mar 24 00:24:31.219: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006249475s Mar 24 00:24:33.223: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Running", Reason="", readiness=true. Elapsed: 4.0104374s Mar 24 00:24:35.228: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.0146177s Mar 24 00:24:37.246: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Running", Reason="", readiness=true. Elapsed: 8.033134898s Mar 24 00:24:39.250: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Running", Reason="", readiness=true. Elapsed: 10.037256172s Mar 24 00:24:41.276: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Running", Reason="", readiness=true. Elapsed: 12.062690856s Mar 24 00:24:43.282: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Running", Reason="", readiness=true. Elapsed: 14.068774095s Mar 24 00:24:45.285: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Running", Reason="", readiness=true. Elapsed: 16.072322099s Mar 24 00:24:47.298: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Running", Reason="", readiness=true. Elapsed: 18.085235881s Mar 24 00:24:49.302: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Running", Reason="", readiness=true. Elapsed: 20.08867248s Mar 24 00:24:51.306: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Running", Reason="", readiness=true. Elapsed: 22.092588983s Mar 24 00:24:53.310: INFO: Pod "pod-subpath-test-downwardapi-qhqd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.096797583s STEP: Saw pod success Mar 24 00:24:53.310: INFO: Pod "pod-subpath-test-downwardapi-qhqd" satisfied condition "Succeeded or Failed" Mar 24 00:24:53.313: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-qhqd container test-container-subpath-downwardapi-qhqd: STEP: delete the pod Mar 24 00:24:53.373: INFO: Waiting for pod pod-subpath-test-downwardapi-qhqd to disappear Mar 24 00:24:53.377: INFO: Pod pod-subpath-test-downwardapi-qhqd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-qhqd Mar 24 00:24:53.377: INFO: Deleting pod "pod-subpath-test-downwardapi-qhqd" in namespace "subpath-3024" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:24:53.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3024" for this suite. 
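
The atomic-writer subpath test mounts a single entry of a downwardAPI volume through volumeMounts[].subPath and keeps reading it while the volume contents are atomically updated, which is why the pod stays Running for roughly twenty seconds before succeeding. A simplified sketch (names are assumed, and the real test also cycles the file contents while the pod runs):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /subpath_mount"]
    volumeMounts:
    - name: podinfo
      mountPath: /subpath_mount
      subPath: podname          # mount one file out of the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
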
• [SLOW TEST:24.275 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":173,"skipped":2901,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:24:53.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:24:53.431: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:24:57.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7545" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2922,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:24:57.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-c687236a-9075-41bc-ad0c-c51d245e0c5d STEP: Creating a pod to test consume secrets Mar 24 00:24:57.596: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-472a1864-fa82-4638-a8d8-f0146c7c1340" in namespace "projected-4253" to be "Succeeded or Failed" Mar 24 00:24:57.614: INFO: Pod "pod-projected-secrets-472a1864-fa82-4638-a8d8-f0146c7c1340": Phase="Pending", Reason="", readiness=false. Elapsed: 17.630306ms Mar 24 00:24:59.618: INFO: Pod "pod-projected-secrets-472a1864-fa82-4638-a8d8-f0146c7c1340": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021874306s Mar 24 00:25:01.622: INFO: Pod "pod-projected-secrets-472a1864-fa82-4638-a8d8-f0146c7c1340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026159838s STEP: Saw pod success Mar 24 00:25:01.622: INFO: Pod "pod-projected-secrets-472a1864-fa82-4638-a8d8-f0146c7c1340" satisfied condition "Succeeded or Failed" Mar 24 00:25:01.625: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-472a1864-fa82-4638-a8d8-f0146c7c1340 container projected-secret-volume-test: STEP: delete the pod Mar 24 00:25:01.656: INFO: Waiting for pod pod-projected-secrets-472a1864-fa82-4638-a8d8-f0146c7c1340 to disappear Mar 24 00:25:01.666: INFO: Pod pod-projected-secrets-472a1864-fa82-4638-a8d8-f0146c7c1340 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:25:01.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4253" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":2923,"failed":0} SSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:25:01.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 24 00:25:01.751: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:25:12.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-594" for this suite. 
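
The submit-and-remove test drives a watch on the pod list and checks that creation, the graceful-termination notice, and the final deletion event are all observed, which accounts for the several seconds between submission and the namespace teardown. A rough kubectl equivalent of that flow (pod name is illustrative):

kubectl get pods --watch &                        # observe create/update/delete events
kubectl run watch-demo --image=busybox --restart=Never -- sleep 3600
kubectl delete pod watch-demo --grace-period=30   # graceful deletion, as in the test
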
• [SLOW TEST:11.061 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2926,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:25:12.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod Mar 24 00:25:12.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-8036 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 24 00:25:12.953: INFO: stderr: "" Mar 24 00:25:12.953: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Mar 24 00:25:12.953: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 24 00:25:12.953: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8036" to be "running and ready, or succeeded" Mar 24 00:25:12.958: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.274225ms Mar 24 00:25:14.962: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008266923s Mar 24 00:25:16.966: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.012333978s Mar 24 00:25:16.966: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 24 00:25:16.966: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Mar 24 00:25:16.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8036' Mar 24 00:25:17.080: INFO: stderr: "" Mar 24 00:25:17.080: INFO: stdout: "I0324 00:25:15.313488 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/vp8z 368\nI0324 00:25:15.513784 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/tzz 217\nI0324 00:25:15.713692 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/4b5q 429\nI0324 00:25:15.913677 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/jkb9 434\nI0324 00:25:16.113670 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/crjh 367\nI0324 00:25:16.313634 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/ksf 308\nI0324 00:25:16.513692 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/8lw 321\nI0324 00:25:16.713668 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/cs7 440\nI0324 00:25:16.913683 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/qjp 253\n" STEP: limiting log lines Mar 24 00:25:17.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8036 --tail=1' Mar 24 00:25:17.176: INFO: stderr: "" Mar 24 00:25:17.176: INFO: stdout: "I0324 00:25:17.113733 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/glmp 212\n" Mar 24 00:25:17.176: INFO: got output "I0324 00:25:17.113733 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/glmp 212\n" STEP: limiting log bytes Mar 24 00:25:17.176: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8036 --limit-bytes=1' Mar 24 00:25:17.273: INFO: stderr: "" Mar 24 00:25:17.273: INFO: stdout: "I" Mar 24 00:25:17.273: INFO: got output "I" STEP: exposing timestamps Mar 24 00:25:17.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8036 --tail=1 --timestamps' Mar 24 00:25:17.371: INFO: stderr: "" Mar 24 00:25:17.371: INFO: stdout: "2020-03-24T00:25:17.313828819Z I0324 00:25:17.313657 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/tr9b 458\n" Mar 24 00:25:17.371: INFO: got output "2020-03-24T00:25:17.313828819Z I0324 00:25:17.313657 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/tr9b 458\n" STEP: restricting to a time range Mar 24 00:25:19.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8036 --since=1s' Mar 24 00:25:19.997: INFO: stderr: "" Mar 24 00:25:19.997: INFO: stdout: "I0324 00:25:19.113648 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/d2p 402\nI0324 00:25:19.313704 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/zjh 567\nI0324 00:25:19.513701 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/r2z 219\nI0324 00:25:19.713680 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/d4b 553\nI0324 00:25:19.913690 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/pzj 415\n" Mar 24 00:25:19.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs
logs-generator logs-generator --namespace=kubectl-8036 --since=24h' Mar 24 00:25:20.114: INFO: stderr: "" Mar 24 00:25:20.114: INFO: stdout: "I0324 00:25:15.313488 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/vp8z 368\nI0324 00:25:15.513784 1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/tzz 217\nI0324 00:25:15.713692 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/4b5q 429\nI0324 00:25:15.913677 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/jkb9 434\nI0324 00:25:16.113670 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/crjh 367\nI0324 00:25:16.313634 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/ksf 308\nI0324 00:25:16.513692 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/8lw 321\nI0324 00:25:16.713668 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/cs7 440\nI0324 00:25:16.913683 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/qjp 253\nI0324 00:25:17.113733 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/glmp 212\nI0324 00:25:17.313657 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/tr9b 458\nI0324 00:25:17.513654 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/n8c 252\nI0324 00:25:17.713748 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/pkz 255\nI0324 00:25:17.913773 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/sns8 249\nI0324 00:25:18.113684 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/j29l 228\nI0324 00:25:18.313798 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/vzqf 401\nI0324 00:25:18.513667 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/nsn 301\nI0324 00:25:18.713706 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/tt8 424\nI0324 00:25:18.913696 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/mx5t 414\nI0324 00:25:19.113648 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/d2p 402\nI0324 00:25:19.313704 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/zjh 567\nI0324 00:25:19.513701 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/r2z 219\nI0324 00:25:19.713680 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/d4b 553\nI0324 00:25:19.913690 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/pzj 415\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Mar 24 00:25:20.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8036' Mar 24 00:25:32.772: INFO: stderr: "" Mar 24 00:25:32.772: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:25:32.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8036" for this suite. 
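
For reference, the filtering exercised above maps directly onto kubectl logs flags (shown here without the --server/--kubeconfig plumbing the harness adds):

kubectl logs logs-generator --namespace=kubectl-8036                        # full log
kubectl logs logs-generator --namespace=kubectl-8036 --tail=1               # last line only
kubectl logs logs-generator --namespace=kubectl-8036 --limit-bytes=1        # first byte only
kubectl logs logs-generator --namespace=kubectl-8036 --tail=1 --timestamps  # RFC3339 timestamps
kubectl logs logs-generator --namespace=kubectl-8036 --since=1s             # recent entries only
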
• [SLOW TEST:20.355 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":177,"skipped":2934,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:25:33.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:25:33.180: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 24 00:25:35.222: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:25:36.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8087" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":178,"skipped":2936,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:25:36.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:25:43.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2708" for this suite. • [SLOW TEST:7.223 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":179,"skipped":2939,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:25:43.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 24 00:25:43.584: INFO: Waiting up to 5m0s for pod "pod-7c3e7b6c-07d0-423b-83aa-a65543900079" in namespace "emptydir-1253" to be "Succeeded or Failed" Mar 24 00:25:43.594: INFO: Pod "pod-7c3e7b6c-07d0-423b-83aa-a65543900079": Phase="Pending", Reason="", readiness=false. Elapsed: 10.272404ms Mar 24 00:25:45.600: INFO: Pod "pod-7c3e7b6c-07d0-423b-83aa-a65543900079": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016433922s Mar 24 00:25:47.605: INFO: Pod "pod-7c3e7b6c-07d0-423b-83aa-a65543900079": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021127772s STEP: Saw pod success Mar 24 00:25:47.605: INFO: Pod "pod-7c3e7b6c-07d0-423b-83aa-a65543900079" satisfied condition "Succeeded or Failed" Mar 24 00:25:47.608: INFO: Trying to get logs from node latest-worker pod pod-7c3e7b6c-07d0-423b-83aa-a65543900079 container test-container: STEP: delete the pod Mar 24 00:25:47.632: INFO: Waiting for pod pod-7c3e7b6c-07d0-423b-83aa-a65543900079 to disappear Mar 24 00:25:47.636: INFO: Pod pod-7c3e7b6c-07d0-423b-83aa-a65543900079 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:25:47.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1253" for this suite. 
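
The emptyDir test above requests a tmpfs-backed volume by setting medium: Memory and then inspects the mount's type and mode from inside the container. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /mnt/tmpfs && ls -ld /mnt/tmpfs"]
    volumeMounts:
    - name: cache
      mountPath: /mnt/tmpfs
  volumes:
  - name: cache
    emptyDir:
      medium: Memory            # tmpfs rather than node-local disk
EOF
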
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":180,"skipped":2944,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:25:47.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1218 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-1218 Mar 24 00:25:47.737: INFO: Found 0 stateful pods, waiting for 1 Mar 24 00:25:57.742: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 24 00:25:57.763: INFO: Deleting all statefulset in ns statefulset-1218 Mar 24 00:25:57.769: INFO: Scaling statefulset ss to 0 Mar 24 00:26:27.815: INFO: Waiting for statefulset status.replicas updated to 0 Mar 24 00:26:27.818: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:26:27.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1218" for this suite. 
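
The "scale subresource" steps above read and write the statefulsets/scale endpoint rather than the full StatefulSet object, which is also what kubectl scale drives. Against the statefulset used above, that would look like:

kubectl scale statefulset ss --replicas=3 -n statefulset-1218
kubectl get statefulset ss -n statefulset-1218 -o jsonpath='{.spec.replicas}'   # now 3
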
• [SLOW TEST:40.194 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":181,"skipped":2945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:26:27.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-b34ab28e-1697-4e70-abc5-0ef674b10271 in namespace container-probe-981 Mar 24 00:26:31.916: INFO: Started pod test-webserver-b34ab28e-1697-4e70-abc5-0ef674b10271 in namespace container-probe-981 STEP: checking the pod's current state and verifying that restartCount is present Mar 24 00:26:31.919: INFO: Initial restart count of pod test-webserver-b34ab28e-1697-4e70-abc5-0ef674b10271 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:30:32.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-981" for this suite. 
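The roughly four minutes between pod start (00:26:31) and teardown (00:30:32) is the observation window during which the restart count must stay 0. A sketch of the kind of probe this test attaches (image and port are assumptions; any server answering 200 on /healthz would do):

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver            # illustrative; the suite appends a uuid
spec:
  containers:
  - name: test-webserver
    image: my-healthy-webserver   # hypothetical image that serves HTTP 200 on /healthz
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
      failureThreshold: 1         # a single failed probe would restart the container; none occur here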
• [SLOW TEST:244.806 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":2975,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:30:32.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:30:33.563: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:30:35.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606633, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606633, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606633, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606633, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:30:38.617: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:30:38.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7353-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:30:39.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3731" for this suite. STEP: Destroying namespace "webhook-3731-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.338 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":183,"skipped":2975,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:30:39.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:30:40.071: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Mar 24 00:30:40.081: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:40.086: INFO: Number of nodes with available pods: 0 Mar 24 00:30:40.086: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:30:41.091: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:41.095: INFO: Number of nodes with available pods: 0 Mar 24 00:30:41.095: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:30:42.091: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:42.094: INFO: Number of nodes with available pods: 0 Mar 24 00:30:42.094: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:30:43.091: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:43.107: INFO: Number of nodes with available pods: 2 Mar 24 00:30:43.107: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 24 00:30:43.135: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:43.135: INFO: Wrong image for pod: daemon-set-sqscn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:43.156: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:44.160: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:44.160: INFO: Wrong image for pod: daemon-set-sqscn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:44.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:45.160: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:45.160: INFO: Wrong image for pod: daemon-set-sqscn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:45.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:46.161: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:46.161: INFO: Wrong image for pod: daemon-set-sqscn. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 24 00:30:46.161: INFO: Pod daemon-set-sqscn is not available Mar 24 00:30:46.165: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:47.160: INFO: Pod daemon-set-cnnd9 is not available Mar 24 00:30:47.160: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:47.163: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:48.160: INFO: Pod daemon-set-cnnd9 is not available Mar 24 00:30:48.160: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:48.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:49.161: INFO: Pod daemon-set-cnnd9 is not available Mar 24 00:30:49.161: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:49.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:50.161: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:50.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:51.161: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:51.166: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:52.161: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:52.161: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:30:52.165: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:53.163: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:53.163: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:30:53.167: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:54.160: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 24 00:30:54.160: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:30:54.163: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:55.161: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:55.161: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:30:55.166: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:56.160: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:56.160: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:30:56.163: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:57.161: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:57.161: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:30:57.165: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:58.162: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:58.162: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:30:58.166: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:30:59.161: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:30:59.161: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:30:59.165: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:31:00.160: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:31:00.160: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:31:00.162: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:31:01.161: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 24 00:31:01.161: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:31:01.166: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:31:02.161: INFO: Wrong image for pod: daemon-set-cptp2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 24 00:31:02.161: INFO: Pod daemon-set-cptp2 is not available Mar 24 00:31:02.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:31:03.161: INFO: Pod daemon-set-mm7p5 is not available Mar 24 00:31:03.166: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Mar 24 00:31:03.169: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:31:03.172: INFO: Number of nodes with available pods: 1 Mar 24 00:31:03.172: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:31:04.178: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:31:04.183: INFO: Number of nodes with available pods: 1 Mar 24 00:31:04.183: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:31:05.177: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:31:05.180: INFO: Number of nodes with available pods: 1 Mar 24 00:31:05.180: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:31:06.176: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:31:06.179: INFO: Number of nodes with available pods: 2 Mar 24 00:31:06.179: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9329, will wait for the garbage collector to delete the pods Mar 24 00:31:06.261: INFO: Deleting DaemonSet.extensions daemon-set took: 8.10307ms Mar 24 00:31:06.562: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.336821ms Mar 24 00:31:13.066: INFO: Number of nodes with available pods: 0 Mar 24 00:31:13.066: INFO: Number of running nodes: 0, number of available pods: 0 Mar 24 00:31:13.069: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9329/daemonsets","resourceVersion":"2286156"},"items":null} Mar 24 00:31:13.071: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9329/pods","resourceVersion":"2286156"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:31:13.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9329" for this suite. 
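The churn above is driven by the DaemonSet's update strategy: with type RollingUpdate, changing the pod template image (here from httpd:2.4.38-alpine to agnhost:2.12, per the log) deletes and recreates pods node by node, which is exactly the "not available" sequence in the messages. A minimal sketch (labels are assumed):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set              # assumed label; must match the template below
  updateStrategy:
    type: RollingUpdate            # replace pods in place when the template changes
    rollingUpdate:
      maxUnavailable: 1            # at most one node without a ready pod at a time
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # updating this to agnhost:2.12 triggers the rollout seen above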
• [SLOW TEST:33.106 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":184,"skipped":2991,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:31:13.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-6f22bc3d-a4ee-4463-a778-f93635b31374 STEP: Creating a pod to test consume secrets Mar 24 00:31:13.179: INFO: Waiting up to 5m0s for pod "pod-secrets-bf0ccac5-d669-4ed2-bd6b-5ff3e1fa0f7b" in namespace "secrets-2386" to be "Succeeded or Failed" Mar 24 00:31:13.183: INFO: Pod "pod-secrets-bf0ccac5-d669-4ed2-bd6b-5ff3e1fa0f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.721516ms Mar 24 00:31:15.187: INFO: Pod "pod-secrets-bf0ccac5-d669-4ed2-bd6b-5ff3e1fa0f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007568344s Mar 24 00:31:17.190: INFO: Pod "pod-secrets-bf0ccac5-d669-4ed2-bd6b-5ff3e1fa0f7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010897602s STEP: Saw pod success Mar 24 00:31:17.190: INFO: Pod "pod-secrets-bf0ccac5-d669-4ed2-bd6b-5ff3e1fa0f7b" satisfied condition "Succeeded or Failed" Mar 24 00:31:17.192: INFO: Trying to get logs from node latest-worker pod pod-secrets-bf0ccac5-d669-4ed2-bd6b-5ff3e1fa0f7b container secret-volume-test: STEP: delete the pod Mar 24 00:31:17.236: INFO: Waiting for pod pod-secrets-bf0ccac5-d669-4ed2-bd6b-5ff3e1fa0f7b to disappear Mar 24 00:31:17.262: INFO: Pod pod-secrets-bf0ccac5-d669-4ed2-bd6b-5ff3e1fa0f7b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:31:17.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2386" for this suite. 
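"Mappings and Item Mode" refers to the items list on a secret volume: each entry maps a secret key to a file path and can pin a per-file mode. A sketch of the pod (secret name from this run; key, path, and mode are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # illustrative; the test reads the mounted file and its mode
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-6f22bc3d-a4ee-4463-a778-f93635b31374
      items:
      - key: data-1                # assumed key name
        path: new-path-data-1      # the "mapping": the key is surfaced under this file name
        mode: 0400                 # the "Item Mode": per-file permission bits (octal)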
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3027,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:31:17.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0324 00:31:57.617468 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 24 00:31:57.617: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:31:57.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6578" for this suite. 
• [SLOW TEST:40.354 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":186,"skipped":3040,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:31:57.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:31:57.685: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f450238a-0a5d-434e-b606-798e45fa0b04" in namespace "security-context-test-2131" to be "Succeeded or Failed" Mar 24 00:31:57.698: INFO: Pod "busybox-readonly-false-f450238a-0a5d-434e-b606-798e45fa0b04": Phase="Pending", Reason="", readiness=false. Elapsed: 12.845494ms Mar 24 00:31:59.713: INFO: Pod "busybox-readonly-false-f450238a-0a5d-434e-b606-798e45fa0b04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027406239s Mar 24 00:32:01.716: INFO: Pod "busybox-readonly-false-f450238a-0a5d-434e-b606-798e45fa0b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031207451s Mar 24 00:32:01.716: INFO: Pod "busybox-readonly-false-f450238a-0a5d-434e-b606-798e45fa0b04" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:32:01.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2131" for this suite. 
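The assertion here is simply that a write to the container's root filesystem succeeds when readOnlyRootFilesystem is false. A sketch (image and command are assumptions consistent with the busybox-readonly-false naming):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false     # illustrative; the suite appends a uuid
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /checkfile && echo writable"]  # the write lands on the root fs
    securityContext:
      readOnlyRootFilesystem: false  # with true, the same touch would fail with a read-only error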
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3062,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:32:01.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0324 00:32:12.674953 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 24 00:32:12.675: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:32:12.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-899" for this suite. 
• [SLOW TEST:10.956 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":188,"skipped":3062,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:32:12.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 24 00:32:12.722: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:32:26.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2721" for this suite.
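The CRD manipulated here carries two versions; flipping served to false on one of them is what removes its definitions from the published OpenAPI spec while leaving the other version's schema untouched. A self-contained sketch (group, kind, and schemas are illustrative, not the generated ones):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com           # illustrative; the suite generates e2e-test-crd-publish-openapi-*-crds
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false                  # marking this version "not served" drops it from the published spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object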
• [SLOW TEST:14.194 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":189,"skipped":3097,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:32:26.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4605 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4605 STEP: creating replication controller externalsvc in namespace services-4605 I0324 00:32:27.042186 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4605, replica count: 2 I0324 00:32:30.092762 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0324 00:32:33.093065 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 24 00:32:33.150: INFO: Creating new exec pod Mar 24 00:32:37.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4605 execpodvksv9 -- /bin/sh -x -c nslookup clusterip-service' Mar 24 00:32:40.208: INFO: stderr: "I0324 00:32:40.086875 2273 log.go:172] (0xc000656000) (0xc000658000) Create stream\nI0324 00:32:40.086915 2273 log.go:172] (0xc000656000) (0xc000658000) Stream added, broadcasting: 1\nI0324 00:32:40.090029 2273 log.go:172] (0xc000656000) Reply frame received for 1\nI0324 00:32:40.090079 2273 log.go:172] (0xc000656000) (0xc0006c8000) Create stream\nI0324 00:32:40.090099 2273 log.go:172] (0xc000656000) (0xc0006c8000) Stream added, broadcasting: 3\nI0324 00:32:40.091182 2273 log.go:172] (0xc000656000) Reply frame received for 3\nI0324 00:32:40.091211 2273 log.go:172] (0xc000656000) (0xc0006c80a0) Create stream\nI0324 00:32:40.091225 2273 log.go:172] (0xc000656000) (0xc0006c80a0) Stream added, broadcasting: 5\nI0324 00:32:40.092369 2273 log.go:172] (0xc000656000) Reply frame received for 
5\nI0324 00:32:40.188674 2273 log.go:172] (0xc000656000) Data frame received for 5\nI0324 00:32:40.188705 2273 log.go:172] (0xc0006c80a0) (5) Data frame handling\nI0324 00:32:40.188728 2273 log.go:172] (0xc0006c80a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0324 00:32:40.198268 2273 log.go:172] (0xc000656000) Data frame received for 3\nI0324 00:32:40.198300 2273 log.go:172] (0xc0006c8000) (3) Data frame handling\nI0324 00:32:40.198325 2273 log.go:172] (0xc0006c8000) (3) Data frame sent\nI0324 00:32:40.199456 2273 log.go:172] (0xc000656000) Data frame received for 3\nI0324 00:32:40.199479 2273 log.go:172] (0xc0006c8000) (3) Data frame handling\nI0324 00:32:40.199506 2273 log.go:172] (0xc0006c8000) (3) Data frame sent\nI0324 00:32:40.199943 2273 log.go:172] (0xc000656000) Data frame received for 3\nI0324 00:32:40.199982 2273 log.go:172] (0xc0006c8000) (3) Data frame handling\nI0324 00:32:40.200146 2273 log.go:172] (0xc000656000) Data frame received for 5\nI0324 00:32:40.200164 2273 log.go:172] (0xc0006c80a0) (5) Data frame handling\nI0324 00:32:40.201826 2273 log.go:172] (0xc000656000) Data frame received for 1\nI0324 00:32:40.201849 2273 log.go:172] (0xc000658000) (1) Data frame handling\nI0324 00:32:40.201861 2273 log.go:172] (0xc000658000) (1) Data frame sent\nI0324 00:32:40.201874 2273 log.go:172] (0xc000656000) (0xc000658000) Stream removed, broadcasting: 1\nI0324 00:32:40.201893 2273 log.go:172] (0xc000656000) Go away received\nI0324 00:32:40.202381 2273 log.go:172] (0xc000656000) (0xc000658000) Stream removed, broadcasting: 1\nI0324 00:32:40.202411 2273 log.go:172] (0xc000656000) (0xc0006c8000) Stream removed, broadcasting: 3\nI0324 00:32:40.202425 2273 log.go:172] (0xc000656000) (0xc0006c80a0) Stream removed, broadcasting: 5\n" Mar 24 00:32:40.208: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4605.svc.cluster.local\tcanonical name = externalsvc.services-4605.svc.cluster.local.\nName:\texternalsvc.services-4605.svc.cluster.local\nAddress: 10.96.163.242\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4605, will wait for the garbage collector to delete the pods Mar 24 00:32:40.266: INFO: Deleting ReplicationController externalsvc took: 4.95383ms Mar 24 00:32:51.866: INFO: Terminating ReplicationController externalsvc pods took: 11.6002638s Mar 24 00:33:03.096: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:33:03.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4605" for this suite. 
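After the type change, the service looks roughly like the sketch below; the nslookup output above confirms the effect, with clusterip-service resolving as a CNAME to the externalsvc FQDN. Only the name, namespace, type, and externalName are taken from this run; the rest is an assumed minimal shape:

apiVersion: v1
kind: Service
metadata:
  name: clusterip-service
  namespace: services-4605
spec:
  type: ExternalName               # changed from ClusterIP; the clusterIP must be cleared on the switch
  externalName: externalsvc.services-4605.svc.cluster.local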
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:36.282 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":190,"skipped":3100,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:33:03.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:33:04.055: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:33:06.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606784, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606784, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606784, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606784, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:33:09.217: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 
00:33:09.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6641" for this suite. STEP: Destroying namespace "webhook-6641-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.661 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":191,"skipped":3121,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:33:09.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:33:09.928: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 24 00:33:12.826: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8437 create -f -' Mar 24 00:33:16.050: INFO: stderr: "" Mar 24 00:33:16.050: INFO: stdout: "e2e-test-crd-publish-openapi-340-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 24 00:33:16.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8437 delete e2e-test-crd-publish-openapi-340-crds test-cr' Mar 24 00:33:16.150: INFO: stderr: "" Mar 24 00:33:16.150: INFO: stdout: "e2e-test-crd-publish-openapi-340-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 24 00:33:16.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8437 apply -f -' Mar 24 00:33:16.389: INFO: stderr: "" Mar 24 00:33:16.389: INFO: stdout: "e2e-test-crd-publish-openapi-340-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 24 00:33:16.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8437 delete e2e-test-crd-publish-openapi-340-crds test-cr' Mar 24 00:33:16.490: INFO: stderr: "" Mar 24 00:33:16.490: INFO: stdout: "e2e-test-crd-publish-openapi-340-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to 
explain CR Mar 24 00:33:16.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-340-crds' Mar 24 00:33:16.735: INFO: stderr: "" Mar 24 00:33:16.735: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-340-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:33:18.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8437" for this suite. • [SLOW TEST:8.802 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":192,"skipped":3135,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:33:18.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-2gvp STEP: Creating a pod to test atomic-volume-subpath Mar 24 00:33:18.685: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2gvp" in namespace "subpath-8272" to be "Succeeded or Failed" Mar 24 00:33:18.720: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Pending", Reason="", readiness=false. Elapsed: 34.346026ms Mar 24 00:33:20.724: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038852416s Mar 24 00:33:22.728: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Running", Reason="", readiness=true. Elapsed: 4.043060056s Mar 24 00:33:24.733: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Running", Reason="", readiness=true. Elapsed: 6.047447645s Mar 24 00:33:26.736: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Running", Reason="", readiness=true. Elapsed: 8.050978566s Mar 24 00:33:28.741: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Running", Reason="", readiness=true. Elapsed: 10.055223682s Mar 24 00:33:30.744: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.059028249s Mar 24 00:33:32.749: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Running", Reason="", readiness=true. Elapsed: 14.063767938s Mar 24 00:33:34.753: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Running", Reason="", readiness=true. Elapsed: 16.068028938s Mar 24 00:33:36.757: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Running", Reason="", readiness=true. Elapsed: 18.072175739s Mar 24 00:33:38.762: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Running", Reason="", readiness=true. Elapsed: 20.076333197s Mar 24 00:33:40.766: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Running", Reason="", readiness=true. Elapsed: 22.080504204s Mar 24 00:33:42.770: INFO: Pod "pod-subpath-test-projected-2gvp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.084925418s STEP: Saw pod success Mar 24 00:33:42.770: INFO: Pod "pod-subpath-test-projected-2gvp" satisfied condition "Succeeded or Failed" Mar 24 00:33:42.773: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-2gvp container test-container-subpath-projected-2gvp: STEP: delete the pod Mar 24 00:33:42.827: INFO: Waiting for pod pod-subpath-test-projected-2gvp to disappear Mar 24 00:33:42.847: INFO: Pod pod-subpath-test-projected-2gvp no longer exists STEP: Deleting pod pod-subpath-test-projected-2gvp Mar 24 00:33:42.847: INFO: Deleting pod "pod-subpath-test-projected-2gvp" in namespace "subpath-8272" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:33:42.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8272" for this suite. • [SLOW TEST:24.235 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":193,"skipped":3146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:33:42.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-4ae21202-cd7d-4336-a233-c5394b1dfc93 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:33:42.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7717" for this suite. 
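No pod ever runs in this empty-key test: the failure is pure API-server validation, since secret keys must consist of alphanumeric characters, '-', '_' or '.', and may not be empty. A sketch of the kind of object that gets rejected (name and value are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo       # illustrative
data:
  "": dmFsdWUtMQ==                 # empty key; the API server rejects the create at validation time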
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":194,"skipped":3175,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:33:42.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:33:43.713: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:33:45.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606823, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606823, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606823, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720606823, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:33:48.739: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:33:48.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-7100" for this suite. STEP: Destroying namespace "webhook-7100-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.023 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":195,"skipped":3178,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:33:48.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:33:48.991: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f7a4b60c-9407-4a58-b80a-47bbf24d8eb8" in namespace "security-context-test-8041" to be "Succeeded or Failed" Mar 24 00:33:48.995: INFO: Pod "busybox-privileged-false-f7a4b60c-9407-4a58-b80a-47bbf24d8eb8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.99408ms Mar 24 00:33:51.003: INFO: Pod "busybox-privileged-false-f7a4b60c-9407-4a58-b80a-47bbf24d8eb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011813688s Mar 24 00:33:53.007: INFO: Pod "busybox-privileged-false-f7a4b60c-9407-4a58-b80a-47bbf24d8eb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015062148s Mar 24 00:33:53.007: INFO: Pod "busybox-privileged-false-f7a4b60c-9407-4a58-b80a-47bbf24d8eb8" satisfied condition "Succeeded or Failed" Mar 24 00:33:53.012: INFO: Got logs for pod "busybox-privileged-false-f7a4b60c-9407-4a58-b80a-47bbf24d8eb8": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:33:53.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8041" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3268,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:33:53.021: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 24 00:33:53.072: INFO: Waiting up to 5m0s for pod "downward-api-52fec12c-255a-43f4-a149-520c4de3473d" in namespace "downward-api-5420" to be "Succeeded or Failed" Mar 24 00:33:53.086: INFO: Pod "downward-api-52fec12c-255a-43f4-a149-520c4de3473d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.200586ms Mar 24 00:33:55.091: INFO: Pod "downward-api-52fec12c-255a-43f4-a149-520c4de3473d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019491014s Mar 24 00:33:57.096: INFO: Pod "downward-api-52fec12c-255a-43f4-a149-520c4de3473d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024043265s STEP: Saw pod success Mar 24 00:33:57.096: INFO: Pod "downward-api-52fec12c-255a-43f4-a149-520c4de3473d" satisfied condition "Succeeded or Failed" Mar 24 00:33:57.099: INFO: Trying to get logs from node latest-worker2 pod downward-api-52fec12c-255a-43f4-a149-520c4de3473d container dapi-container: STEP: delete the pod Mar 24 00:33:57.118: INFO: Waiting for pod downward-api-52fec12c-255a-43f4-a149-520c4de3473d to disappear Mar 24 00:33:57.123: INFO: Pod downward-api-52fec12c-255a-43f4-a149-520c4de3473d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:33:57.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5420" for this suite. 
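------------------------------
The downward-api spec above injects pod fields as environment variables through fieldRef: metadata.name, metadata.namespace, and status.podIP. A minimal sketch of that wiring; the variable names and helper are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// downwardEnv builds env vars whose values come from pod fields at runtime.
func downwardEnv() []corev1.EnvVar {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	return []corev1.EnvVar{
		fieldEnv("POD_NAME", "metadata.name"),
		fieldEnv("POD_NAMESPACE", "metadata.namespace"),
		fieldEnv("POD_IP", "status.podIP"),
	}
}

func main() { fmt.Println(downwardEnv()) }
------------------------------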
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:33:57.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:33:57.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-733" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":198,"skipped":3325,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:33:57.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 24 00:33:57.310: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 24 00:33:57.320: INFO: Waiting for terminating namespaces to be deleted... 
Mar 24 00:33:57.322: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 24 00:33:57.336: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 24 00:33:57.336: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 00:33:57.336: INFO: pod-qos-class-12a4faaa-b152-4b20-81a5-0a1171f081ee from pods-733 started at 2020-03-24 00:33:57 +0000 UTC (1 container statuses recorded) Mar 24 00:33:57.336: INFO: Container agnhost ready: false, restart count 0 Mar 24 00:33:57.336: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 24 00:33:57.336: INFO: Container kube-proxy ready: true, restart count 0 Mar 24 00:33:57.336: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 24 00:33:57.341: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 24 00:33:57.341: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 00:33:57.341: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 24 00:33:57.341: INFO: Container kube-proxy ready: true, restart count 0 Mar 24 00:33:57.341: INFO: busybox-privileged-false-f7a4b60c-9407-4a58-b80a-47bbf24d8eb8 from security-context-test-8041 started at 2020-03-24 00:33:49 +0000 UTC (1 container statuses recorded) Mar 24 00:33:57.341: INFO: Container busybox-privileged-false-f7a4b60c-9407-4a58-b80a-47bbf24d8eb8 ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Mar 24 00:33:57.442: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Mar 24 00:33:57.442: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Mar 24 00:33:57.442: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Mar 24 00:33:57.442: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker Mar 24 00:33:57.442: INFO: Pod pod-qos-class-12a4faaa-b152-4b20-81a5-0a1171f081ee requesting resource cpu=100m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. Mar 24 00:33:57.442: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 Mar 24 00:33:57.447: INFO: Creating a pod which consumes cpu=11060m on Node latest-worker STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-07aac8c9-05a1-4dba-b5d8-b8d7a218b8f8.15ff163b38738d87], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4746/filler-pod-07aac8c9-05a1-4dba-b5d8-b8d7a218b8f8 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-07aac8c9-05a1-4dba-b5d8-b8d7a218b8f8.15ff163b94bc5797], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-07aac8c9-05a1-4dba-b5d8-b8d7a218b8f8.15ff163bf3d4d5ca], Reason = [Created], Message = [Created container filler-pod-07aac8c9-05a1-4dba-b5d8-b8d7a218b8f8] STEP: Considering event: Type = [Normal], Name = [filler-pod-07aac8c9-05a1-4dba-b5d8-b8d7a218b8f8.15ff163c0bee39c4], Reason = [Started], Message = [Started container filler-pod-07aac8c9-05a1-4dba-b5d8-b8d7a218b8f8] STEP: Considering event: Type = [Normal], Name = [filler-pod-2b9568ae-a84f-42cf-8fb8-4bf8d20194d2.15ff163b3a3d5f0e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4746/filler-pod-2b9568ae-a84f-42cf-8fb8-4bf8d20194d2 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-2b9568ae-a84f-42cf-8fb8-4bf8d20194d2.15ff163be272128c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2b9568ae-a84f-42cf-8fb8-4bf8d20194d2.15ff163c06e59f37], Reason = [Created], Message = [Created container filler-pod-2b9568ae-a84f-42cf-8fb8-4bf8d20194d2] STEP: Considering event: Type = [Normal], Name = [filler-pod-2b9568ae-a84f-42cf-8fb8-4bf8d20194d2.15ff163c14da56e7], Reason = [Started], Message = [Started container filler-pod-2b9568ae-a84f-42cf-8fb8-4bf8d20194d2] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ff163ca0f476f6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.15ff163ca42a70ab], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:34:04.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4746" for this suite. 
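------------------------------
The predicate test above is arithmetic on allocatable CPU: it sums the existing requests per node (kindnet 100m, kube-proxy 0m, the leftover QoS pod 100m), creates "filler" pods sized to consume the remainder (11130m and 11060m in this run), and then asserts that one more pod with any non-zero CPU request is rejected with "Insufficient cpu". A sketch of such a filler container; the quantity is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// fillerContainer requests a fixed amount of CPU so that the node's
// remaining schedulable CPU drops to (near) zero.
func fillerContainer(cpu string) corev1.Container {
	return corev1.Container{
		Name:  "filler",
		Image: "k8s.gcr.io/pause:3.2",
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU: resource.MustParse(cpu),
			},
		},
	}
}

func main() { fmt.Println(fillerContainer("11130m")) }
------------------------------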
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:7.399 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":199,"skipped":3325,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:34:04.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 24 00:34:07.751: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:34:07.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6131" for this suite. 
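------------------------------
The container-runtime spec above depends on two container fields: terminationMessagePath pointing at a non-default file, and runAsUser set to a non-root UID, so the kubelet must read the message back as that user. A minimal sketch; the path, UID, and message are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000) // non-root, illustrative
	c := corev1.Container{
		Name:    "termination-message-container",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		// Non-default path; the kubelet reads this file on termination.
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
	}
	fmt.Println(c.TerminationMessagePath)
}
------------------------------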
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3329,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:34:07.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:34:07.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8189" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":201,"skipped":3337,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:34:07.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:34:07.928: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3061' Mar 24 00:34:08.253: INFO: stderr: "" Mar 24 00:34:08.253: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 24 00:34:08.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3061' Mar 24 00:34:08.515: INFO: stderr: "" Mar 24 00:34:08.515: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Mar 24 00:34:09.519: INFO: Selector matched 1 pods for map[app:agnhost] Mar 24 00:34:09.519: INFO: Found 0 / 1 Mar 24 00:34:10.536: INFO: Selector matched 1 pods for map[app:agnhost] Mar 24 00:34:10.536: INFO: Found 0 / 1 Mar 24 00:34:11.673: INFO: Selector matched 1 pods for map[app:agnhost] Mar 24 00:34:11.673: INFO: Found 0 / 1 Mar 24 00:34:12.520: INFO: Selector matched 1 pods for map[app:agnhost] Mar 24 00:34:12.520: INFO: Found 1 / 1 Mar 24 00:34:12.520: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 24 00:34:12.523: INFO: Selector matched 1 pods for map[app:agnhost] Mar 24 00:34:12.523: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 24 00:34:12.523: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-jjg7j --namespace=kubectl-3061' Mar 24 00:34:12.647: INFO: stderr: "" Mar 24 00:34:12.647: INFO: stdout: "Name: agnhost-master-jjg7j\nNamespace: kubectl-3061\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Tue, 24 Mar 2020 00:34:08 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.252\nIPs:\n IP: 10.244.2.252\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://2918c325e9fef5c3c18970c6f68a1e37c4bf8744d2d55ffc425881c28b99dd92\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 24 Mar 2020 00:34:10 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-tc2qr (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-tc2qr:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-tc2qr\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-3061/agnhost-master-jjg7j to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" Mar 24 00:34:12.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3061' Mar 24 00:34:12.791: INFO: stderr: "" Mar 24 00:34:12.791: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3061\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: 
agnhost-master-jjg7j\n" Mar 24 00:34:12.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3061' Mar 24 00:34:12.911: INFO: stderr: "" Mar 24 00:34:12.911: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3061\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.34.46\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.252:6379\nSession Affinity: None\nEvents: \n" Mar 24 00:34:12.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Mar 24 00:34:13.061: INFO: stderr: "" Mar 24 00:34:13.061: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Tue, 24 Mar 2020 00:34:06 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 24 Mar 2020 00:33:19 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 24 Mar 2020 00:33:19 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 24 Mar 2020 00:33:19 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 24 Mar 2020 00:33:19 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 8d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 8d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 8d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 8d\n kube-system kube-controller-manager-latest-control-plane 200m 
(1%) 0 (0%) 0 (0%) 0 (0%) 8d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 8d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Mar 24 00:34:13.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-3061' Mar 24 00:34:13.162: INFO: stderr: "" Mar 24 00:34:13.162: INFO: stdout: "Name: kubectl-3061\nLabels: e2e-framework=kubectl\n e2e-run=a59f3c36-28e9-4a60-9975-a3d03ff1cc12\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:34:13.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3061" for this suite. • [SLOW TEST:5.295 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":202,"skipped":3348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:34:13.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-8acdfa4f-b3e8-47a5-81f3-43544700e9e4 STEP: Creating a pod to test consume configMaps Mar 24 00:34:13.246: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-31f007ae-42ca-44e9-bfb6-303ac4ad9c6e" in namespace "projected-2784" to be "Succeeded or Failed" Mar 24 00:34:13.277: INFO: Pod "pod-projected-configmaps-31f007ae-42ca-44e9-bfb6-303ac4ad9c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.149547ms Mar 24 00:34:15.281: INFO: Pod "pod-projected-configmaps-31f007ae-42ca-44e9-bfb6-303ac4ad9c6e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.034869195s Mar 24 00:34:17.285: INFO: Pod "pod-projected-configmaps-31f007ae-42ca-44e9-bfb6-303ac4ad9c6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039186106s STEP: Saw pod success Mar 24 00:34:17.285: INFO: Pod "pod-projected-configmaps-31f007ae-42ca-44e9-bfb6-303ac4ad9c6e" satisfied condition "Succeeded or Failed" Mar 24 00:34:17.289: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-31f007ae-42ca-44e9-bfb6-303ac4ad9c6e container projected-configmap-volume-test: STEP: delete the pod Mar 24 00:34:17.315: INFO: Waiting for pod pod-projected-configmaps-31f007ae-42ca-44e9-bfb6-303ac4ad9c6e to disappear Mar 24 00:34:17.337: INFO: Pod pod-projected-configmaps-31f007ae-42ca-44e9-bfb6-303ac4ad9c6e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:34:17.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2784" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3402,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:34:17.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Mar 24 00:34:17.406: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:34:17.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1306" for this suite. 
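------------------------------
`kubectl proxy -p 0` in the spec above asks for an ephemeral port; the proxy prints the port it actually bound, and a plain unauthenticated HTTP GET against /api/ on localhost then succeeds because the proxy attaches the kubeconfig credentials. A sketch of the client side of that check; the port below is a placeholder, not taken from the log:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assume `kubectl proxy -p 0` reported a line like
	// "Starting to serve on 127.0.0.1:8001" (placeholder port).
	resp, err := http.Get("http://127.0.0.1:8001/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
------------------------------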
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":204,"skipped":3421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:34:17.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:34:23.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7418" for this suite. • [SLOW TEST:5.664 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":205,"skipped":3452,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:34:23.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 24 00:34:23.240: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e8e55d9-94b8-4731-96b9-8456047adc01" in namespace "projected-4163" to be "Succeeded or Failed" Mar 24 00:34:23.255: INFO: Pod "downwardapi-volume-9e8e55d9-94b8-4731-96b9-8456047adc01": Phase="Pending", Reason="", readiness=false. Elapsed: 15.420308ms Mar 24 00:34:25.259: INFO: Pod "downwardapi-volume-9e8e55d9-94b8-4731-96b9-8456047adc01": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018618334s Mar 24 00:34:27.263: INFO: Pod "downwardapi-volume-9e8e55d9-94b8-4731-96b9-8456047adc01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022714921s STEP: Saw pod success Mar 24 00:34:27.263: INFO: Pod "downwardapi-volume-9e8e55d9-94b8-4731-96b9-8456047adc01" satisfied condition "Succeeded or Failed" Mar 24 00:34:27.266: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9e8e55d9-94b8-4731-96b9-8456047adc01 container client-container: STEP: delete the pod Mar 24 00:34:27.298: INFO: Waiting for pod downwardapi-volume-9e8e55d9-94b8-4731-96b9-8456047adc01 to disappear Mar 24 00:34:27.314: INFO: Pod downwardapi-volume-9e8e55d9-94b8-4731-96b9-8456047adc01 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:34:27.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4163" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3465,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:34:27.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-1d7bcda6-53a3-4f43-ab38-e93af41c4ff9 in namespace container-probe-5440 Mar 24 00:34:31.402: INFO: Started pod liveness-1d7bcda6-53a3-4f43-ab38-e93af41c4ff9 in namespace container-probe-5440 STEP: checking the pod's current state and verifying that restartCount is present Mar 24 00:34:31.404: INFO: Initial restart count of pod liveness-1d7bcda6-53a3-4f43-ab38-e93af41c4ff9 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:38:32.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5440" for this suite. 
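------------------------------
The probe above is a TCP liveness check: the kubelet periodically dials port 8080 on the pod, and the test asserts that restartCount stays at 0 for the whole observation window (about four minutes in this run). A sketch of the probe definition; in client-go of this era the embedded field is Handler (later releases renamed it ProbeHandler), and the thresholds are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		Handler: corev1.Handler{
			// Liveness succeeds as long as the TCP dial succeeds.
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15,
		PeriodSeconds:       10,
		FailureThreshold:    3,
	}
	fmt.Println(probe)
}
------------------------------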
• [SLOW TEST:245.038 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3482,"failed":0} SS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:38:32.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-6d92482b-1672-4848-923d-5db64b9bcb7d STEP: Creating secret with name secret-projected-all-test-volume-664a0a3a-3606-4ecf-b822-48129bc43d78 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 24 00:38:32.441: INFO: Waiting up to 5m0s for pod "projected-volume-cb42e3a6-94d6-4f33-b494-752105085702" in namespace "projected-4934" to be "Succeeded or Failed" Mar 24 00:38:32.462: INFO: Pod "projected-volume-cb42e3a6-94d6-4f33-b494-752105085702": Phase="Pending", Reason="", readiness=false. Elapsed: 20.709867ms Mar 24 00:38:34.466: INFO: Pod "projected-volume-cb42e3a6-94d6-4f33-b494-752105085702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024796034s Mar 24 00:38:36.470: INFO: Pod "projected-volume-cb42e3a6-94d6-4f33-b494-752105085702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028894349s STEP: Saw pod success Mar 24 00:38:36.470: INFO: Pod "projected-volume-cb42e3a6-94d6-4f33-b494-752105085702" satisfied condition "Succeeded or Failed" Mar 24 00:38:36.473: INFO: Trying to get logs from node latest-worker pod projected-volume-cb42e3a6-94d6-4f33-b494-752105085702 container projected-all-volume-test: STEP: delete the pod Mar 24 00:38:36.513: INFO: Waiting for pod projected-volume-cb42e3a6-94d6-4f33-b494-752105085702 to disappear Mar 24 00:38:36.522: INFO: Pod projected-volume-cb42e3a6-94d6-4f33-b494-752105085702 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:38:36.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4934" for this suite. 
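------------------------------
The projected-volume spec above mounts a ConfigMap, a Secret, and downward-API fields under one volume so all three appear in a single directory. A minimal sketch of such a volume; the source names are shortened versions of the ones in the log and the file path is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(vol.Name)
}
------------------------------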
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3484,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:38:36.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 24 00:38:36.601: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:38:36.606: INFO: Number of nodes with available pods: 0 Mar 24 00:38:36.606: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:38:37.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:38:37.612: INFO: Number of nodes with available pods: 0 Mar 24 00:38:37.612: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:38:38.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:38:38.614: INFO: Number of nodes with available pods: 0 Mar 24 00:38:38.614: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:38:39.610: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:38:39.613: INFO: Number of nodes with available pods: 0 Mar 24 00:38:39.613: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:38:40.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:38:40.615: INFO: Number of nodes with available pods: 2 Mar 24 00:38:40.615: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 24 00:38:40.637: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:38:40.642: INFO: Number of nodes with available pods: 2 Mar 24 00:38:40.642: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-723, will wait for the garbage collector to delete the pods Mar 24 00:38:41.794: INFO: Deleting DaemonSet.extensions daemon-set took: 16.856198ms Mar 24 00:38:42.194: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.199983ms Mar 24 00:38:53.008: INFO: Number of nodes with available pods: 0 Mar 24 00:38:53.008: INFO: Number of running nodes: 0, number of available pods: 0 Mar 24 00:38:53.010: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-723/daemonsets","resourceVersion":"2288585"},"items":null} Mar 24 00:38:53.013: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-723/pods","resourceVersion":"2288585"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:38:53.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-723" for this suite. • [SLOW TEST:16.514 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":209,"skipped":3493,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:38:53.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 24 00:38:53.218: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8795 /api/v1/namespaces/watch-8795/configmaps/e2e-watch-test-label-changed fcb761c6-85e7-45cd-9eff-0f070f556c9f 2288591 0 2020-03-24 00:38:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:38:53.218: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8795 /api/v1/namespaces/watch-8795/configmaps/e2e-watch-test-label-changed fcb761c6-85e7-45cd-9eff-0f070f556c9f 2288594 0 2020-03-24 00:38:53 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:38:53.218: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8795 /api/v1/namespaces/watch-8795/configmaps/e2e-watch-test-label-changed fcb761c6-85e7-45cd-9eff-0f070f556c9f 2288596 0 2020-03-24 00:38:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 24 00:39:03.250: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8795 /api/v1/namespaces/watch-8795/configmaps/e2e-watch-test-label-changed fcb761c6-85e7-45cd-9eff-0f070f556c9f 2288647 0 2020-03-24 00:38:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:39:03.250: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8795 /api/v1/namespaces/watch-8795/configmaps/e2e-watch-test-label-changed fcb761c6-85e7-45cd-9eff-0f070f556c9f 2288648 0 2020-03-24 00:38:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:39:03.250: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8795 /api/v1/namespaces/watch-8795/configmaps/e2e-watch-test-label-changed fcb761c6-85e7-45cd-9eff-0f070f556c9f 2288649 0 2020-03-24 00:38:53 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:39:03.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8795" for this suite. 
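------------------------------
The watch spec above drives a label-selected watch on ConfigMaps: it sees ADDED and MODIFIED events while the label matches, a DELETED event the moment the label stops matching, and an ADDED event again when the label is restored. A sketch of that watch pattern, assuming client-go v0.18+ signatures; the namespace is illustrative, the selector mirrors the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	// From the watcher's viewpoint, relabeling away looks like DELETED
	// and relabeling back looks like ADDED.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
------------------------------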
• [SLOW TEST:10.214 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":210,"skipped":3509,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:39:03.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Mar 24 00:39:03.316: INFO: Waiting up to 5m0s for pod "client-containers-e9f74ce4-c336-46fa-a567-42390737e426" in namespace "containers-6960" to be "Succeeded or Failed" Mar 24 00:39:03.373: INFO: Pod "client-containers-e9f74ce4-c336-46fa-a567-42390737e426": Phase="Pending", Reason="", readiness=false. Elapsed: 56.907838ms Mar 24 00:39:05.377: INFO: Pod "client-containers-e9f74ce4-c336-46fa-a567-42390737e426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061337719s Mar 24 00:39:07.381: INFO: Pod "client-containers-e9f74ce4-c336-46fa-a567-42390737e426": Phase="Running", Reason="", readiness=true. Elapsed: 4.065497894s Mar 24 00:39:09.386: INFO: Pod "client-containers-e9f74ce4-c336-46fa-a567-42390737e426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069665514s STEP: Saw pod success Mar 24 00:39:09.386: INFO: Pod "client-containers-e9f74ce4-c336-46fa-a567-42390737e426" satisfied condition "Succeeded or Failed" Mar 24 00:39:09.389: INFO: Trying to get logs from node latest-worker2 pod client-containers-e9f74ce4-c336-46fa-a567-42390737e426 container test-container: STEP: delete the pod Mar 24 00:39:09.434: INFO: Waiting for pod client-containers-e9f74ce4-c336-46fa-a567-42390737e426 to disappear Mar 24 00:39:09.452: INFO: Pod client-containers-e9f74ce4-c336-46fa-a567-42390737e426 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:39:09.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6960" for this suite. 
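------------------------------
The spec above confirms that a container's command field replaces the image's default ENTRYPOINT (setting args instead would replace CMD). A minimal sketch; the image and command are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "test-container",
		Image: "busybox",
		// command overrides the image ENTRYPOINT entirely.
		Command: []string{"/bin/sh", "-c", "echo overridden entrypoint"},
	}
	fmt.Println(c.Command)
}
------------------------------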
• [SLOW TEST:6.200 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3537,"failed":0} SS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:39:09.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6972.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6972.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6972.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6972.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6972.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6972.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 00:39:15.584: INFO: DNS probes using dns-6972/dns-test-6cbd78e3-3f43-4c58-b494-6f1159dd38f7 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:39:15.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6972" for this suite. 
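------------------------------
The DNS spec above loops `getent hosts` inside wheezy and jessie probe pods and writes OK files into /results on success. A rough Go stand-in for that pod-side check, assuming it runs inside the pod; it only verifies that the kubelet-managed /etc/hosts contains an entry for the pod's own hostname, a simplification of the shell loop in the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	hostname, _ := os.Hostname()
	if strings.Contains(string(data), hostname) {
		fmt.Println("OK") // mirrors the `echo OK > /results/...` convention
	}
}
------------------------------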
• [SLOW TEST:6.254 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":212,"skipped":3539,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:39:15.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-7ea70bab-13e2-4bde-b187-cd21f8c216b5 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:39:22.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9955" for this suite. • [SLOW TEST:6.371 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3575,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:39:22.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:39:22.155: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 5.050378ms) Mar 24 00:39:22.192: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 37.190708ms) Mar 24 00:39:22.196: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 4.163784ms) Mar 24 00:39:22.200: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.517296ms) Mar 24 00:39:22.203: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.939514ms) Mar 24 00:39:22.206: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.470474ms) Mar 24 00:39:22.209: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.909553ms) Mar 24 00:39:22.212: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.666513ms) Mar 24 00:39:22.215: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.233993ms) Mar 24 00:39:22.218: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.929129ms) Mar 24 00:39:22.221: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.127878ms) Mar 24 00:39:22.225: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.47926ms) Mar 24 00:39:22.228: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.195761ms) Mar 24 00:39:22.234: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 5.891731ms) Mar 24 00:39:22.237: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.113432ms) Mar 24 00:39:22.240: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.861305ms) Mar 24 00:39:22.243: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.090873ms) Mar 24 00:39:22.246: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.053626ms) Mar 24 00:39:22.250: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.296581ms) Mar 24 00:39:22.253: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.396179ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:39:22.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-627" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":214,"skipped":3580,"failed":0} SS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:39:22.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-420e759a-8176-489e-8fae-c3b07d80bdc2 STEP: Creating configMap with name cm-test-opt-upd-f0f96d1f-dd84-44f5-9f45-f5bf67659548 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-420e759a-8176-489e-8fae-c3b07d80bdc2 STEP: Updating configmap cm-test-opt-upd-f0f96d1f-dd84-44f5-9f45-f5bf67659548 STEP: Creating configMap with name cm-test-opt-create-1b2bf760-0ff0-4bf5-812a-7af73b8c09a8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:40:41.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9478" for this suite. 
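The "optional updates" test above turns on one knob: the configMap volume source is marked optional, so the pod starts even while a referenced ConfigMap is missing, and the kubelet re-projects the mounted files as ConfigMaps are deleted, updated and created, which is what the "waiting to observe update in volume" step polls for. A minimal sketch of that volume source, assuming stock corev1 types (names are placeholders):

package example

import corev1 "k8s.io/api/core/v1"

// optionalConfigMapVolume returns a volume that tolerates the ConfigMap being
// absent: with Optional set, the pod starts (with an empty mount) and the
// kubelet later projects the keys in once the ConfigMap appears or changes.
func optionalConfigMapVolume(volName, cmName string) corev1.Volume {
    optional := true
    return corev1.Volume{
        Name: volName,
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                Optional:             &optional,
            },
        },
    }
}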
• [SLOW TEST:78.847 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3582,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:40:41.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Mar 24 00:40:41.182: INFO: Waiting up to 5m0s for pod "client-containers-8a77341d-e6b1-4189-9152-4e038896f6c1" in namespace "containers-577" to be "Succeeded or Failed" Mar 24 00:40:41.196: INFO: Pod "client-containers-8a77341d-e6b1-4189-9152-4e038896f6c1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.473593ms Mar 24 00:40:43.200: INFO: Pod "client-containers-8a77341d-e6b1-4189-9152-4e038896f6c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017824315s Mar 24 00:40:45.204: INFO: Pod "client-containers-8a77341d-e6b1-4189-9152-4e038896f6c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022016666s STEP: Saw pod success Mar 24 00:40:45.204: INFO: Pod "client-containers-8a77341d-e6b1-4189-9152-4e038896f6c1" satisfied condition "Succeeded or Failed" Mar 24 00:40:45.208: INFO: Trying to get logs from node latest-worker pod client-containers-8a77341d-e6b1-4189-9152-4e038896f6c1 container test-container: STEP: delete the pod Mar 24 00:40:45.232: INFO: Waiting for pod client-containers-8a77341d-e6b1-4189-9152-4e038896f6c1 to disappear Mar 24 00:40:45.244: INFO: Pod client-containers-8a77341d-e6b1-4189-9152-4e038896f6c1 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:40:45.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-577" for this suite. 
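The "override all" variant above sets both fields at once: command replaces the image ENTRYPOINT and args replaces the image CMD, so nothing baked into the image is executed. A one-function sketch with illustrative values:

package example

import corev1 "k8s.io/api/core/v1"

// overrideAll sets both fields: Command replaces the image ENTRYPOINT and
// Args replaces the image CMD, so the image's own defaults never run.
func overrideAll(c *corev1.Container) {
    c.Command = []string{"/bin/sh", "-c"}    // replaces ENTRYPOINT
    c.Args = []string{"echo override all"}   // replaces CMD
}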
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":216,"skipped":3593,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:40:45.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:40:45.340: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Pending, waiting for it to be Running (with Ready = true) Mar 24 00:40:47.344: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Pending, waiting for it to be Running (with Ready = true) Mar 24 00:40:49.345: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Running (Ready = false) Mar 24 00:40:51.345: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Running (Ready = false) Mar 24 00:40:53.345: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Running (Ready = false) Mar 24 00:40:55.345: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Running (Ready = false) Mar 24 00:40:57.344: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Running (Ready = false) Mar 24 00:40:59.345: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Running (Ready = false) Mar 24 00:41:01.345: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Running (Ready = false) Mar 24 00:41:03.345: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Running (Ready = false) Mar 24 00:41:05.345: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Running (Ready = false) Mar 24 00:41:07.345: INFO: The status of Pod test-webserver-cb0dd41e-606d-4b0a-99e7-837ed8f2fa65 is Running (Ready = true) Mar 24 00:41:07.348: INFO: Container started at 2020-03-24 00:40:47 +0000 UTC, pod became ready at 2020-03-24 00:41:06 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:41:07.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8499" for this suite. 
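The timeline above (container started 00:40:47, Ready only at 00:41:06, and no restarts) is exactly what an HTTP readiness probe with an initial delay produces: readiness gates traffic but, unlike a liveness probe, never restarts the container. A sketch against the corev1 API of this vintage, where the probe's embedded handler field is still named Handler (later renamed ProbeHandler); the 20-second delay is an illustrative value, not lifted from the test source:

package example

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// readinessWithDelay attaches an HTTP readiness probe that cannot pass before
// InitialDelaySeconds elapses, so the pod runs un-Ready for roughly that long.
func readinessWithDelay(c *corev1.Container) {
    c.ReadinessProbe = &corev1.Probe{
        Handler: corev1.Handler{
            HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
        },
        InitialDelaySeconds: 20, // illustrative; pick to match the expected un-Ready window
        PeriodSeconds:       5,
    }
}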
• [SLOW TEST:22.104 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3615,"failed":0} SSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:41:07.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6050, will wait for the garbage collector to delete the pods Mar 24 00:41:11.476: INFO: Deleting Job.batch foo took: 6.020738ms Mar 24 00:41:11.776: INFO: Terminating Job.batch foo pods took: 300.25733ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:41:52.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6050" for this suite. 
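The deletion step above leans on cascading deletion: the Job object is removed and the garbage collector takes down its pods via their owner references, which is why the log says "will wait for the garbage collector to delete the pods". A sketch of the client-go call, assuming a recent client-go (v0.18+, where Delete takes a context); foreground propagation is one valid policy choice here, not necessarily the framework's:

package example

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteJobAndDependents removes a Job and lets the garbage collector delete
// its pods. With foreground propagation the Job object itself only disappears
// once its dependents are gone.
func deleteJobAndDependents(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    policy := metav1.DeletePropagationForeground
    return cs.BatchV1().Jobs(ns).Delete(ctx, name, metav1.DeleteOptions{PropagationPolicy: &policy})
}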
• [SLOW TEST:45.450 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":218,"skipped":3621,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:41:52.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:41:53.647: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:41:55.668: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607313, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607313, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607313, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607313, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:41:58.695: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:41:58.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:41:59.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3735" for this suite. STEP: Destroying namespace "webhook-3735-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.075 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":219,"skipped":3632,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:41:59.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Mar 24 00:41:59.923: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8807' Mar 24 00:42:00.225: INFO: stderr: "" Mar 24 00:42:00.225: INFO: stdout: "pod/pause created\n" Mar 24 00:42:00.225: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 24 00:42:00.225: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8807" to be "running and ready" Mar 24 00:42:00.257: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 31.708979ms Mar 24 00:42:02.261: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036093217s Mar 24 00:42:04.266: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.040448535s Mar 24 00:42:04.266: INFO: Pod "pause" satisfied condition "running and ready" Mar 24 00:42:04.266: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Mar 24 00:42:04.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8807' Mar 24 00:42:04.379: INFO: stderr: "" Mar 24 00:42:04.379: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 24 00:42:04.379: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8807' Mar 24 00:42:04.494: INFO: stderr: "" Mar 24 00:42:04.494: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 24 00:42:04.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8807' Mar 24 00:42:04.596: INFO: stderr: "" Mar 24 00:42:04.596: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 24 00:42:04.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8807' Mar 24 00:42:04.683: INFO: stderr: "" Mar 24 00:42:04.683: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Mar 24 00:42:04.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8807' Mar 24 00:42:04.799: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 00:42:04.799: INFO: stdout: "pod \"pause\" force deleted\n" Mar 24 00:42:04.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8807' Mar 24 00:42:05.116: INFO: stderr: "No resources found in kubectl-8807 namespace.\n" Mar 24 00:42:05.116: INFO: stdout: "" Mar 24 00:42:05.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8807 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 24 00:42:05.254: INFO: stderr: "" Mar 24 00:42:05.254: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:42:05.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8807" for this suite. 
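The kubectl invocations above boil down to label mutations on the pod object: `kubectl label pods pause testing-label=testing-label-value` is a merge patch on metadata.labels, and the trailing-dash form (`testing-label-`) is the same patch with a null value, which deletes the key. A client-go sketch of the add case, assuming v0.18+ signatures:

package example

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
)

// labelPod applies a merge patch that sets one label on a pod, roughly what
// `kubectl label pods <pod> key=value` performs under the hood.
func labelPod(ctx context.Context, cs kubernetes.Interface, ns, pod, key, val string) error {
    patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{%q:%q}}}`, key, val))
    _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.MergePatchType, patch, metav1.PatchOptions{})
    return err
}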
• [SLOW TEST:5.380 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":220,"skipped":3654,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:42:05.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:42:05.312: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:42:09.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7520" for this suite. 
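The websocket test above drives the pod's /exec subresource directly over a websocket connection. From client-go the same subresource is usually reached with an SPDY executor; a sketch, assuming the caller has already built the /exec URL (command and stream flags go in its query string, e.g. via the REST client) and has a rest.Config:

package example

import (
    "bytes"
    "net/url"

    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/remotecommand"
)

// execOverConnection runs the command encoded in execURL inside a pod over a
// streaming connection and returns whatever the remote process wrote to stdout.
func execOverConnection(cfg *rest.Config, execURL *url.URL) (string, error) {
    exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", execURL)
    if err != nil {
        return "", err
    }
    var stdout, stderr bytes.Buffer
    err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
    return stdout.String(), err
}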
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3668,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:42:09.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7422.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7422.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7422.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7422.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 00:42:13.630: INFO: DNS probes using dns-test-c7c86f13-dc40-43dd-9a6f-e6903545470c succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7422.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7422.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7422.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7422.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 00:42:19.728: INFO: File wheezy_udp@dns-test-service-3.dns-7422.svc.cluster.local from pod dns-7422/dns-test-b36ecb06-a697-4a83-b8c9-6a9ee5c0fdcb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 24 00:42:19.732: INFO: Lookups using dns-7422/dns-test-b36ecb06-a697-4a83-b8c9-6a9ee5c0fdcb failed for: [wheezy_udp@dns-test-service-3.dns-7422.svc.cluster.local] Mar 24 00:42:24.740: INFO: File jessie_udp@dns-test-service-3.dns-7422.svc.cluster.local from pod dns-7422/dns-test-b36ecb06-a697-4a83-b8c9-6a9ee5c0fdcb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 24 00:42:24.740: INFO: Lookups using dns-7422/dns-test-b36ecb06-a697-4a83-b8c9-6a9ee5c0fdcb failed for: [jessie_udp@dns-test-service-3.dns-7422.svc.cluster.local] Mar 24 00:42:29.736: INFO: File wheezy_udp@dns-test-service-3.dns-7422.svc.cluster.local from pod dns-7422/dns-test-b36ecb06-a697-4a83-b8c9-6a9ee5c0fdcb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 24 00:42:29.740: INFO: File jessie_udp@dns-test-service-3.dns-7422.svc.cluster.local from pod dns-7422/dns-test-b36ecb06-a697-4a83-b8c9-6a9ee5c0fdcb contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 24 00:42:29.740: INFO: Lookups using dns-7422/dns-test-b36ecb06-a697-4a83-b8c9-6a9ee5c0fdcb failed for: [wheezy_udp@dns-test-service-3.dns-7422.svc.cluster.local jessie_udp@dns-test-service-3.dns-7422.svc.cluster.local] Mar 24 00:42:34.758: INFO: File wheezy_udp@dns-test-service-3.dns-7422.svc.cluster.local from pod dns-7422/dns-test-b36ecb06-a697-4a83-b8c9-6a9ee5c0fdcb contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 24 00:42:34.762: INFO: Lookups using dns-7422/dns-test-b36ecb06-a697-4a83-b8c9-6a9ee5c0fdcb failed for: [wheezy_udp@dns-test-service-3.dns-7422.svc.cluster.local] Mar 24 00:42:39.741: INFO: DNS probes using dns-test-b36ecb06-a697-4a83-b8c9-6a9ee5c0fdcb succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7422.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7422.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7422.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7422.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 00:42:46.363: INFO: DNS probes using dns-test-098af63d-d1e2-47bf-8264-a8bce2be89aa succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:42:46.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7422" for this suite. 
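The service under test is an ExternalName service: it exists only as a DNS CNAME (dns-test-service-3.dns-7422.svc.cluster.local -> foo.example.com), so updating spec.externalName to bar.example.com surfaces as a CNAME change, and the probes transiently seeing the stale 'foo.example.com.' answer above is just DNS caches catching up. Switching spec.type to ClusterIP afterwards replaces the CNAME with an A record, which the third probe pod verifies. A sketch of the initial object, assuming stock corev1 types:

package example

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// externalNameService builds a Service that materialises purely as a DNS
// CNAME to an external hostname; no cluster IP or endpoints are allocated.
func externalNameService(ns string) *corev1.Service {
    return &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3", Namespace: ns},
        Spec: corev1.ServiceSpec{
            Type:         corev1.ServiceTypeExternalName,
            ExternalName: "foo.example.com",
        },
    }
}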
• [SLOW TEST:36.959 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":222,"skipped":3671,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:42:46.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 24 00:42:46.745: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c974be5c-d7db-40ff-9b95-e5e0f3ef21a0" in namespace "downward-api-8405" to be "Succeeded or Failed" Mar 24 00:42:46.877: INFO: Pod "downwardapi-volume-c974be5c-d7db-40ff-9b95-e5e0f3ef21a0": Phase="Pending", Reason="", readiness=false. Elapsed: 132.218265ms Mar 24 00:42:48.882: INFO: Pod "downwardapi-volume-c974be5c-d7db-40ff-9b95-e5e0f3ef21a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13682594s Mar 24 00:42:50.886: INFO: Pod "downwardapi-volume-c974be5c-d7db-40ff-9b95-e5e0f3ef21a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.14048778s STEP: Saw pod success Mar 24 00:42:50.886: INFO: Pod "downwardapi-volume-c974be5c-d7db-40ff-9b95-e5e0f3ef21a0" satisfied condition "Succeeded or Failed" Mar 24 00:42:50.888: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c974be5c-d7db-40ff-9b95-e5e0f3ef21a0 container client-container: STEP: delete the pod Mar 24 00:42:50.937: INFO: Waiting for pod downwardapi-volume-c974be5c-d7db-40ff-9b95-e5e0f3ef21a0 to disappear Mar 24 00:42:50.948: INFO: Pod downwardapi-volume-c974be5c-d7db-40ff-9b95-e5e0f3ef21a0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:42:50.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8405" for this suite. 
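The downward API volume in this test exposes the container's own CPU limit as a file; the divisor controls the unit, so with a divisor of 1m the file contains the limit in millicores. A sketch with illustrative file path, assuming stock corev1 types; the container name matches the log's client-container:

package example

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// cpuLimitVolume projects the named container's CPU limit into a file via the
// downward API's resourceFieldRef.
func cpuLimitVolume() corev1.Volume {
    return corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "cpu_limit", // illustrative file name
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "limits.cpu",
                        Divisor:       resource.MustParse("1m"), // report in millicores
                    },
                }},
            },
        },
    }
}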
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3700,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:42:50.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 24 00:42:59.844: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 24 00:42:59.869: INFO: Pod pod-with-poststart-http-hook still exists Mar 24 00:43:01.869: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 24 00:43:01.873: INFO: Pod pod-with-poststart-http-hook still exists Mar 24 00:43:03.869: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 24 00:43:03.872: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:43:03.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3237" for this suite. 
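The lifecycle test above wires an HTTP postStart hook: right after the container starts, the kubelet GETs an endpoint on the separately deployed handler pod ("create the container to handle the HTTPGet hook request"), and the container is only considered successfully started once the hook returns. A sketch, assuming the corev1 API of this vintage (the handler type is still named Handler, later renamed LifecycleHandler); host, path and port are placeholders:

package example

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// withPostStartHTTP attaches an HTTP postStart hook that the kubelet fires
// against the handler pod's IP immediately after the container starts.
func withPostStartHTTP(c *corev1.Container, handlerIP string) {
    c.Lifecycle = &corev1.Lifecycle{
        PostStart: &corev1.Handler{
            HTTPGet: &corev1.HTTPGetAction{
                Host: handlerIP,               // IP of the hook-handler pod
                Path: "/echo?msg=poststart",   // placeholder endpoint
                Port: intstr.FromInt(8080),
            },
        },
    }
}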
• [SLOW TEST:12.925 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:43:03.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-388f6a41-17f0-4284-bba5-f3c8d9a2d8ed STEP: Creating secret with name s-test-opt-upd-a79653b4-c0c9-4605-becf-50c254d91d97 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-388f6a41-17f0-4284-bba5-f3c8d9a2d8ed STEP: Updating secret s-test-opt-upd-a79653b4-c0c9-4605-becf-50c254d91d97 STEP: Creating secret with name s-test-opt-create-ad2586c4-e5b2-49db-9d68-b1cb595d5f16 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:43:16.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9244" for this suite. 
• [SLOW TEST:12.244 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:43:16.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 24 00:43:16.213: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:16.230: INFO: Number of nodes with available pods: 0 Mar 24 00:43:16.230: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:43:17.234: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:17.237: INFO: Number of nodes with available pods: 0 Mar 24 00:43:17.237: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:43:18.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:18.403: INFO: Number of nodes with available pods: 0 Mar 24 00:43:18.404: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:43:19.235: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:19.238: INFO: Number of nodes with available pods: 0 Mar 24 00:43:19.238: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:43:20.235: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:20.238: INFO: Number of nodes with available pods: 2 Mar 24 00:43:20.238: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 24 00:43:20.266: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:20.268: INFO: Number of nodes with available pods: 1 Mar 24 00:43:20.269: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:21.272: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:21.276: INFO: Number of nodes with available pods: 1 Mar 24 00:43:21.276: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:22.351: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:22.379: INFO: Number of nodes with available pods: 1 Mar 24 00:43:22.379: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:23.273: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:23.277: INFO: Number of nodes with available pods: 1 Mar 24 00:43:23.277: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:24.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:24.278: INFO: Number of nodes with available pods: 1 Mar 24 00:43:24.278: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:25.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:25.278: INFO: Number of nodes with available pods: 1 Mar 24 00:43:25.278: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:26.285: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:26.288: INFO: Number of nodes with available pods: 1 Mar 24 00:43:26.288: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:27.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:27.278: INFO: Number of nodes with available pods: 1 Mar 24 00:43:27.278: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:28.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:28.277: INFO: Number of nodes with available pods: 1 Mar 24 00:43:28.277: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:29.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:29.278: INFO: Number of nodes with available pods: 1 Mar 24 00:43:29.278: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:30.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:30.277: INFO: Number of nodes with available pods: 1 Mar 24 00:43:30.277: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:31.272: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:31.274: INFO: Number of nodes with available pods: 1 Mar 24 00:43:31.274: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:32.315: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:32.363: INFO: Number of nodes with available pods: 1 Mar 24 00:43:32.363: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:33.292: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:33.295: INFO: Number of nodes with available pods: 1 Mar 24 00:43:33.295: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:34.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:34.277: INFO: Number of nodes with available pods: 1 Mar 24 00:43:34.277: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:35.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:35.276: INFO: Number of nodes with available pods: 1 Mar 24 00:43:35.276: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:36.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:36.278: INFO: Number of nodes with available pods: 1 Mar 24 00:43:36.278: INFO: Node latest-worker2 is running more than one daemon pod Mar 24 00:43:37.274: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:43:37.278: INFO: Number of nodes with available pods: 2 Mar 24 00:43:37.278: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9788, will wait for the garbage collector to delete the pods Mar 24 00:43:37.339: INFO: Deleting DaemonSet.extensions daemon-set took: 6.507433ms Mar 24 00:43:37.640: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.236017ms Mar 24 00:43:43.057: INFO: Number of nodes with available pods: 0 Mar 24 00:43:43.057: INFO: Number of running nodes: 0, number of available pods: 0 Mar 24 00:43:43.059: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9788/daemonsets","resourceVersion":"2290134"},"items":null} Mar 24 00:43:43.062: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9788/pods","resourceVersion":"2290134"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:43:43.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9788" for this suite. • [SLOW TEST:26.952 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":226,"skipped":3800,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:43:43.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-9ff1de41-3ba9-4141-9b26-f4cf06e16bfa STEP: Creating a pod to test consume configMaps Mar 24 00:43:43.140: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7a7acd89-ab11-41b1-8e44-0500c49b8b14" in namespace "projected-8911" to be "Succeeded or Failed" Mar 24 00:43:43.151: INFO: Pod "pod-projected-configmaps-7a7acd89-ab11-41b1-8e44-0500c49b8b14": Phase="Pending", Reason="", readiness=false. Elapsed: 10.639598ms Mar 24 00:43:45.155: INFO: Pod "pod-projected-configmaps-7a7acd89-ab11-41b1-8e44-0500c49b8b14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014530633s Mar 24 00:43:47.159: INFO: Pod "pod-projected-configmaps-7a7acd89-ab11-41b1-8e44-0500c49b8b14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018468686s STEP: Saw pod success Mar 24 00:43:47.159: INFO: Pod "pod-projected-configmaps-7a7acd89-ab11-41b1-8e44-0500c49b8b14" satisfied condition "Succeeded or Failed" Mar 24 00:43:47.162: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-7a7acd89-ab11-41b1-8e44-0500c49b8b14 container projected-configmap-volume-test: STEP: delete the pod Mar 24 00:43:47.183: INFO: Waiting for pod pod-projected-configmaps-7a7acd89-ab11-41b1-8e44-0500c49b8b14 to disappear Mar 24 00:43:47.195: INFO: Pod pod-projected-configmaps-7a7acd89-ab11-41b1-8e44-0500c49b8b14 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:43:47.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8911" for this suite. 
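"With mappings and Item mode set" means the ConfigMap keys are not dumped 1:1 into the volume: each KeyToPath entry remaps a key to a chosen relative path and per-file mode, and the test then reads the file back to check content and permissions. A sketch using a projected volume source, with placeholder key and path, and 0400 standing in for whatever mode the suite asserts:

package example

import corev1 "k8s.io/api/core/v1"

// projectedConfigMapWithMode maps one ConfigMap key to an explicit path and
// per-item file mode inside a projected volume.
func projectedConfigMapWithMode(cmName string) corev1.VolumeSource {
    mode := int32(0400) // illustrative per-file mode
    return corev1.VolumeSource{
        Projected: &corev1.ProjectedVolumeSource{
            Sources: []corev1.VolumeProjection{{
                ConfigMap: &corev1.ConfigMapProjection{
                    LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                    Items: []corev1.KeyToPath{{
                        Key:  "data-1",           // placeholder key
                        Path: "path/to/data-1",   // remapped location in the volume
                        Mode: &mode,
                    }},
                },
            }},
        },
    }
}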
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3801,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:43:47.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:43:47.283: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 24 00:43:52.302: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 24 00:43:52.302: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 24 00:43:52.360: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8713 /apis/apps/v1/namespaces/deployment-8713/deployments/test-cleanup-deployment de2e8d7a-1e19-4207-9396-da17049940d5 2290220 1 2020-03-24 00:43:52 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0007e4a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 24 00:43:52.390: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-8713 /apis/apps/v1/namespaces/deployment-8713/replicasets/test-cleanup-deployment-577c77b589 d58091b8-90d9-44f4-967f-3241565add41 2290223 1 2020-03-24 00:43:52
+0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment de2e8d7a-1e19-4207-9396-da17049940d5 0xc002b935e7 0xc002b935e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b93698 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 24 00:43:52.390: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 24 00:43:52.391: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-8713 /apis/apps/v1/namespaces/deployment-8713/replicasets/test-cleanup-controller 02e5b917-c71c-4483-b3bc-d197426d1d3c 2290222 1 2020-03-24 00:43:47 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment de2e8d7a-1e19-4207-9396-da17049940d5 0xc002b9347f 0xc002b93490}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b934f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 24 00:43:52.442: INFO: Pod "test-cleanup-controller-tllvf" is available: &Pod{ObjectMeta:{test-cleanup-controller-tllvf test-cleanup-controller- deployment-8713 /api/v1/namespaces/deployment-8713/pods/test-cleanup-controller-tllvf 9ce1fec3-94fa-458c-b1f8-9a36ce340b3b 2290209 0 2020-03-24 00:43:47 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 02e5b917-c71c-4483-b3bc-d197426d1d3c 0xc003cc03a7 0xc003cc03a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-299gl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-299gl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-299gl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:43:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:43:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:43:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:43:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.16,StartTime:2020-03-24 00:43:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-24 00:43:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9954fd408789c25fb6174f5e57b6061e65ea40a029faa4a16cc0b653055d4d66,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 24 00:43:52.442: INFO: Pod "test-cleanup-deployment-577c77b589-vd798" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-vd798 test-cleanup-deployment-577c77b589- deployment-8713 /api/v1/namespaces/deployment-8713/pods/test-cleanup-deployment-577c77b589-vd798 3fae1ea3-e296-4f7a-b702-53a1fc5297ac 2290229 0 2020-03-24 00:43:52 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 d58091b8-90d9-44f4-967f-3241565add41 0xc003cc0537 0xc003cc0538}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-299gl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-299gl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-299gl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priorit
y:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:43:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:43:52.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8713" for this suite. • [SLOW TEST:5.297 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":228,"skipped":3811,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:43:52.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3405.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3405.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3405.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3405.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3405.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3405.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 00:43:58.646: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:43:58.649: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:43:58.652: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:43:58.655: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:43:58.662: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:43:58.665: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:43:58.667: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod 
dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:43:58.670: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:43:58.675: INFO: Lookups using dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local] Mar 24 00:44:03.680: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:03.684: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:03.687: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:03.690: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:03.699: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:03.702: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:03.706: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:03.709: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:03.715: INFO: Lookups using dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local] Mar 24 00:44:08.679: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:08.681: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:08.684: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:08.687: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:08.694: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:08.697: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:08.699: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:08.703: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:08.708: INFO: Lookups using dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local] Mar 24 00:44:13.680: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:13.684: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:13.687: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:13.690: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:13.700: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:13.703: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:13.706: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:13.709: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:13.715: INFO: Lookups using dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local] Mar 24 00:44:18.685: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:18.688: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:18.690: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:18.692: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested 
resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:18.698: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:18.699: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:18.702: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:18.704: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:18.709: INFO: Lookups using dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local] Mar 24 00:44:23.679: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:23.683: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:23.687: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:23.690: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:23.700: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:23.704: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:23.707: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:23.710: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local from pod dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce: the server could not find the requested resource (get pods dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce) Mar 24 00:44:23.716: INFO: Lookups using dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3405.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3405.svc.cluster.local jessie_udp@dns-test-service-2.dns-3405.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3405.svc.cluster.local] Mar 24 00:44:28.724: INFO: DNS probes using dns-3405/dns-test-632ce2c8-a010-46e8-9a6d-09789e7f64ce succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:44:29.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3405" for this suite. • [SLOW TEST:36.613 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":229,"skipped":3831,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:44:29.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 24 00:44:34.010: INFO: Successfully updated pod "annotationupdate7c676e71-ae6d-4c1e-b8b3-34bb9993e8b8" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:44:38.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8502" for this suite. 
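The Projected downwardAPI spec above ends once the kubelet picks up an annotation change and rewrites the projected file inside the running pod ("Successfully updated pod annotationupdate..."). For readers reproducing that behavior by hand outside the e2e framework, a minimal sketch follows; the pod name downward-demo, the annotation key build, and the mount path /etc/podinfo are illustrative choices, not values taken from this run:

# Pod whose annotations are projected into a file via a downwardAPI source
# (illustrative names; not from the test above).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
  annotations:
    build: "one"
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF

# Modify the annotation; the kubelet refreshes the projected file on its
# periodic sync, so the new value appears in the container shortly after.
kubectl annotate pod downward-demo build=two --overwrite
kubectl logs downward-demo --tail=5

This mirrors what the spec asserts: the file content tracks metadata.annotations without restarting the pod.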
• [SLOW TEST:8.931 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":3844,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:44:38.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4813.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4813.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4813.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4813.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4813.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4813.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 00:44:44.212: INFO: DNS probes using dns-4813/dns-test-99222118-ec72-4e4f-b100-faea509336ef succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:44:44.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4813" for this suite. 
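The Hostname spec above relies on the pod-level hostname and subdomain fields: when a headless Service shares its name with a pod's subdomain, the cluster DNS serves <hostname>.<subdomain>.<namespace>.svc.cluster.local, which is exactly the record the wheezy/jessie probes resolve with getent and dig. A minimal sketch of the same wiring, assuming the default cluster.local suffix and the default namespace; the names sub-demo, probe, and querier-1 are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: sub-demo              # must match the pod's .spec.subdomain
spec:
  clusterIP: None             # headless: DNS answers with the pod IP directly
  selector:
    app: sub-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: probe
  labels:
    app: sub-demo
spec:
  hostname: querier-1
  subdomain: sub-demo
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF

# Resolve the per-pod record from inside the pod (busybox ships nslookup;
# the e2e probe images use getent and dig for the equivalent check).
kubectl exec probe -- nslookup querier-1.sub-demo.default.svc.cluster.local

The earlier Subdomain spec exercises the same record shape over both UDP (+notcp) and TCP (+tcp), which is why each name appears twice per probe round in the log.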
• [SLOW TEST:6.309 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":231,"skipped":3901,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:44:44.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3177 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3177;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3177 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3177;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3177.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3177.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3177.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3177.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3177.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3177.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3177.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3177.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3177.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3177.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3177.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 141.138.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.138.141_udp@PTR;check="$$(dig +tcp +noall +answer +search 141.138.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.138.141_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3177 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3177;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3177 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3177;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3177.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3177.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3177.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3177.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3177.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3177.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3177.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3177.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3177.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3177.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3177.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3177.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 141.138.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.138.141_udp@PTR;check="$$(dig +tcp +noall +answer +search 141.138.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.138.141_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 24 00:44:50.872: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.874: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.876: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.879: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.881: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.883: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.886: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.890: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.914: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.917: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.919: INFO: Unable to read jessie_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.922: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.925: INFO: Unable to read jessie_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.928: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.931: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.933: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:50.949: INFO: Lookups using dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3177 wheezy_tcp@dns-test-service.dns-3177 wheezy_udp@dns-test-service.dns-3177.svc wheezy_tcp@dns-test-service.dns-3177.svc wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3177 jessie_tcp@dns-test-service.dns-3177 jessie_udp@dns-test-service.dns-3177.svc jessie_tcp@dns-test-service.dns-3177.svc jessie_udp@_http._tcp.dns-test-service.dns-3177.svc jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc] Mar 24 00:44:55.954: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:55.958: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:55.961: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:55.964: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:55.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:55.971: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:55.975: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:55.978: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:56.002: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:56.005: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:56.008: INFO: Unable to read jessie_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:56.011: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:56.014: INFO: Unable to read jessie_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:56.018: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:56.024: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:56.026: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:44:56.043: INFO: Lookups using dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3177 wheezy_tcp@dns-test-service.dns-3177 wheezy_udp@dns-test-service.dns-3177.svc wheezy_tcp@dns-test-service.dns-3177.svc wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3177 jessie_tcp@dns-test-service.dns-3177 jessie_udp@dns-test-service.dns-3177.svc jessie_tcp@dns-test-service.dns-3177.svc jessie_udp@_http._tcp.dns-test-service.dns-3177.svc jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc] Mar 24 00:45:00.954: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:00.958: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:00.961: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:00.964: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177 from pod 
dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:00.967: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:00.970: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:00.972: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:00.975: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:00.996: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:00.999: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:01.003: INFO: Unable to read jessie_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:01.006: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:01.010: INFO: Unable to read jessie_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:01.013: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:01.016: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:01.020: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:01.038: INFO: Lookups using dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3177 wheezy_tcp@dns-test-service.dns-3177 wheezy_udp@dns-test-service.dns-3177.svc wheezy_tcp@dns-test-service.dns-3177.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3177 jessie_tcp@dns-test-service.dns-3177 jessie_udp@dns-test-service.dns-3177.svc jessie_tcp@dns-test-service.dns-3177.svc jessie_udp@_http._tcp.dns-test-service.dns-3177.svc jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc] Mar 24 00:45:05.954: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:05.957: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:05.960: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:05.963: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:05.966: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:05.968: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:05.971: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:05.974: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:05.996: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:05.999: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:06.002: INFO: Unable to read jessie_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:06.005: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:06.007: INFO: Unable to read jessie_udp@dns-test-service.dns-3177.svc from pod 
dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:06.009: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:06.011: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:06.014: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:06.029: INFO: Lookups using dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3177 wheezy_tcp@dns-test-service.dns-3177 wheezy_udp@dns-test-service.dns-3177.svc wheezy_tcp@dns-test-service.dns-3177.svc wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3177 jessie_tcp@dns-test-service.dns-3177 jessie_udp@dns-test-service.dns-3177.svc jessie_tcp@dns-test-service.dns-3177.svc jessie_udp@_http._tcp.dns-test-service.dns-3177.svc jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc] Mar 24 00:45:10.954: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:10.959: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:10.962: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:10.965: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:10.969: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:10.972: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:10.975: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:10.978: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod 
dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:10.999: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:11.002: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:11.005: INFO: Unable to read jessie_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:11.008: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:11.011: INFO: Unable to read jessie_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:11.014: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:11.017: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:11.020: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:11.039: INFO: Lookups using dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3177 wheezy_tcp@dns-test-service.dns-3177 wheezy_udp@dns-test-service.dns-3177.svc wheezy_tcp@dns-test-service.dns-3177.svc wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3177 jessie_tcp@dns-test-service.dns-3177 jessie_udp@dns-test-service.dns-3177.svc jessie_tcp@dns-test-service.dns-3177.svc jessie_udp@_http._tcp.dns-test-service.dns-3177.svc jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc] Mar 24 00:45:15.954: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:15.957: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:15.960: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the 
server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:15.964: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:15.967: INFO: Unable to read wheezy_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:15.970: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:15.973: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:15.976: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:15.999: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:16.002: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:16.005: INFO: Unable to read jessie_udp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:16.009: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177 from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:16.012: INFO: Unable to read jessie_udp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:16.015: INFO: Unable to read jessie_tcp@dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:16.018: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:16.022: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc from pod dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad: the server could not find the requested resource (get pods dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad) Mar 24 00:45:16.081: INFO: Lookups using dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3177 wheezy_tcp@dns-test-service.dns-3177 wheezy_udp@dns-test-service.dns-3177.svc wheezy_tcp@dns-test-service.dns-3177.svc wheezy_udp@_http._tcp.dns-test-service.dns-3177.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3177.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3177 jessie_tcp@dns-test-service.dns-3177 jessie_udp@dns-test-service.dns-3177.svc jessie_tcp@dns-test-service.dns-3177.svc jessie_udp@_http._tcp.dns-test-service.dns-3177.svc jessie_tcp@_http._tcp.dns-test-service.dns-3177.svc] Mar 24 00:45:21.222: INFO: DNS probes using dns-3177/dns-test-f88df4dc-aa1e-4a42-83f9-cb49511b8aad succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:45:21.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3177" for this suite. • [SLOW TEST:37.448 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":232,"skipped":3909,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:45:21.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 24 00:45:21.853: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 24 00:45:21.874: INFO: Waiting for terminating namespaces to be deleted... 
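A note on the [sig-network] DNS spec that finishes above, before the scheduler output continues: the long run of "the server could not find the requested resource" messages is the harness re-polling the probe pod's per-name result files until every lookup succeeds, which it finally does at 00:45:21. The whole name matrix resolves against one headless service; a minimal client-go sketch of creating it (service name and namespace are taken from the log, the pod selector label is an illustrative assumption, and this is not the e2e framework's own code):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,                  // headless: DNS answers with the backing pod IPs
			Selector:  map[string]string{"dns-test": "true"}, // hypothetical pod label
			Ports: []corev1.ServicePort{
				{Name: "http", Protocol: corev1.ProtocolTCP, Port: 80},
			},
		},
	}
	created, err := cs.CoreV1().Services("dns-3177").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created service", created.Name)
}

From a pod in namespace dns-3177, the partial qualified names dns-test-service, dns-test-service.dns-3177, and dns-test-service.dns-3177.svc all expand to the same fully qualified name via the resolv.conf search path, and _http._tcp.dns-test-service... returns SRV records for the named port; that is exactly the matrix the wheezy and jessie probes walk over both UDP and TCP.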
Mar 24 00:45:21.876: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 24 00:45:21.882: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 24 00:45:21.882: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 00:45:21.882: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 24 00:45:21.882: INFO: Container kube-proxy ready: true, restart count 0 Mar 24 00:45:21.882: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 24 00:45:21.900: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 24 00:45:21.900: INFO: Container kindnet-cni ready: true, restart count 0 Mar 24 00:45:21.900: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container status recorded) Mar 24 00:45:21.900: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15ff16da962a5227], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:45:22.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9311" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":233,"skipped":3920,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:45:22.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:45:37.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9223" for this suite.
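The Job spec that just finished hinges on restartPolicy OnFailure: the kubelet restarts the failed container in place ("locally") rather than the Job controller replacing the pod. A rough client-go sketch of such a Job (the sometimes-failing command, the counts, and the namespace are illustrative assumptions, not the test's actual manifest):

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	completions, parallelism := int32(4), int32(2)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "sometimes-fail"},
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Parallelism: &parallelism,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure keeps the pod and restarts its container locally,
					// which is the behavior the spec above verifies.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:  "c",
						Image: "busybox",
						// Exits nonzero on odd seconds, so roughly half the attempts fail.
						Command: []string{"sh", "-c", "exit $(($(date +%s) % 2))"},
					}},
				},
			},
		},
	}
	created, err := cs.BatchV1().Jobs("default").Create(context.TODO(), job, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created job", created.Name)
}

The Job controller keeps counting successful pod exits until Completions is reached, which is what "Ensuring job reaches completions" waits for.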
• [SLOW TEST:14.092 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":234,"skipped":3946,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:45:37.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 24 00:45:37.125: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-a 0fb26734-665d-4cdc-998e-3f028134eb73 2290849 0 2020-03-24 00:45:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:45:37.125: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-a 0fb26734-665d-4cdc-998e-3f028134eb73 2290849 0 2020-03-24 00:45:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 24 00:45:47.133: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-a 0fb26734-665d-4cdc-998e-3f028134eb73 2290916 0 2020-03-24 00:45:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:45:47.133: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-a 0fb26734-665d-4cdc-998e-3f028134eb73 2290916 0 2020-03-24 00:45:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 24 00:45:57.141: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-a 0fb26734-665d-4cdc-998e-3f028134eb73 2290946 0 2020-03-24 
00:45:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:45:57.141: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-a 0fb26734-665d-4cdc-998e-3f028134eb73 2290946 0 2020-03-24 00:45:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 24 00:46:07.151: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-a 0fb26734-665d-4cdc-998e-3f028134eb73 2290976 0 2020-03-24 00:45:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:46:07.152: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-a 0fb26734-665d-4cdc-998e-3f028134eb73 2290976 0 2020-03-24 00:45:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 24 00:46:17.167: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-b 1ae415cb-84e3-44c7-8e81-daa6857d646c 2291003 0 2020-03-24 00:46:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:46:17.167: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-b 1ae415cb-84e3-44c7-8e81-daa6857d646c 2291003 0 2020-03-24 00:46:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 24 00:46:27.175: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-b 1ae415cb-84e3-44c7-8e81-daa6857d646c 2291032 0 2020-03-24 00:46:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:46:27.175: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-508 /api/v1/namespaces/watch-508/configmaps/e2e-watch-test-configmap-b 1ae415cb-84e3-44c7-8e81-daa6857d646c 2291032 0 2020-03-24 00:46:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:46:37.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-508" for this suite. 
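Each "Got :" event in the Watchers spec above is logged twice because two of the three watches match it: the label-A watch and the A-or-B watch both observe configmap-a, and likewise for label B. A compact sketch of opening one of those label-filtered watches with client-go (namespace and label value taken from the log; this is not the framework's own helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().ConfigMaps("watch-508").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm := ev.Object.(*corev1.ConfigMap)
		// Prints ADDED / MODIFIED / DELETED entries like the "Got :" lines above.
		fmt.Println(ev.Type, cm.Name, cm.Data)
	}
}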
• [SLOW TEST:60.144 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":235,"skipped":3954,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:46:37.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-73c73940-acf4-48ff-b068-7902cd931879 in namespace container-probe-6014 Mar 24 00:46:41.266: INFO: Started pod liveness-73c73940-acf4-48ff-b068-7902cd931879 in namespace container-probe-6014 STEP: checking the pod's current state and verifying that restartCount is present Mar 24 00:46:41.268: INFO: Initial restart count of pod liveness-73c73940-acf4-48ff-b068-7902cd931879 is 0 Mar 24 00:47:05.324: INFO: Restart count of pod container-probe-6014/liveness-73c73940-acf4-48ff-b068-7902cd931879 is now 1 (24.056079239s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:47:05.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6014" for this suite. 
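The restart logged at 00:47:05 (restartCount 0 to 1 after about 24s) is the kubelet acting on a failing HTTP liveness probe. A sketch of the probe wiring on a pod (image, port, and thresholds are illustrative assumptions; note the struct field is named Handler in client-go up to v0.21 and ProbeHandler from v0.22):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/liveness", // illustrative: serves /healthz, then starts failing
				Args:  []string{"/server"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in client-go >= v0.22
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}

Once /healthz returns non-2xx, the kubelet kills and restarts the container and restartCount increments, which is exactly the counter the spec polls.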
• [SLOW TEST:28.205 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":3985,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:47:05.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:47:05.695: INFO: Create a RollingUpdate DaemonSet Mar 24 00:47:05.699: INFO: Check that daemon pods launch on every node of the cluster Mar 24 00:47:05.710: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:47:05.721: INFO: Number of nodes with available pods: 0 Mar 24 00:47:05.721: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:47:06.727: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:47:06.730: INFO: Number of nodes with available pods: 0 Mar 24 00:47:06.730: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:47:07.726: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:47:07.729: INFO: Number of nodes with available pods: 0 Mar 24 00:47:07.729: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:47:08.725: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:47:08.728: INFO: Number of nodes with available pods: 0 Mar 24 00:47:08.728: INFO: Node latest-worker is running more than one daemon pod Mar 24 00:47:09.726: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:47:09.730: INFO: Number of nodes with available pods: 2 Mar 24 00:47:09.730: INFO: Number of running nodes: 2, number of available pods: 2 Mar 24 00:47:09.730: INFO: Update the DaemonSet to trigger a rollout Mar 24 00:47:09.737: INFO: Updating DaemonSet daemon-set Mar 24 00:47:13.755: INFO: Roll back the DaemonSet before rollout is complete Mar 24 
00:47:13.761: INFO: Updating DaemonSet daemon-set Mar 24 00:47:13.761: INFO: Make sure DaemonSet rollback is complete Mar 24 00:47:13.766: INFO: Wrong image for pod: daemon-set-4tmdq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 24 00:47:13.766: INFO: Pod daemon-set-4tmdq is not available Mar 24 00:47:13.784: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:47:14.788: INFO: Wrong image for pod: daemon-set-4tmdq. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 24 00:47:14.788: INFO: Pod daemon-set-4tmdq is not available Mar 24 00:47:14.792: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 24 00:47:15.788: INFO: Pod daemon-set-cw77n is not available Mar 24 00:47:15.792: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2846, will wait for the garbage collector to delete the pods Mar 24 00:47:15.858: INFO: Deleting DaemonSet.extensions daemon-set took: 6.180057ms Mar 24 00:47:16.359: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.307125ms Mar 24 00:47:19.062: INFO: Number of nodes with available pods: 0 Mar 24 00:47:19.062: INFO: Number of running nodes: 0, number of available pods: 0 Mar 24 00:47:19.084: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2846/daemonsets","resourceVersion":"2291297"},"items":null} Mar 24 00:47:19.087: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2846/pods","resourceVersion":"2291297"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:47:19.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2846" for this suite. 
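"Rollback without unnecessary restarts" means that when the template is restored, nodes still running the old, healthy pod keep it; only the pod stuck on foo:non-existent (daemon-set-4tmdq above) is replaced, by daemon-set-cw77n. A sketch of driving that update-then-rollback by hand with client-go (namespace, names, and images taken from the log; error handling kept minimal):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ds, err := cs.AppsV1().DaemonSets("daemonsets-2846").Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Trigger a rollout to an image that cannot be pulled.
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	ds, err = cs.AppsV1().DaemonSets("daemonsets-2846").Update(ctx, ds, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	// Roll back before the rollout completes by restoring the previous template;
	// pods already running the old image are left untouched.
	ds.Spec.Template.Spec.Containers[0].Image = "docker.io/library/httpd:2.4.38-alpine"
	if _, err := cs.AppsV1().DaemonSets("daemonsets-2846").Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("rolled back daemon-set")
}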
• [SLOW TEST:13.713 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":237,"skipped":4007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:47:19.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-e6202f25-0236-4d09-b3bb-bfc52617671e STEP: Creating a pod to test consume configMaps Mar 24 00:47:19.173: INFO: Waiting up to 5m0s for pod "pod-configmaps-8a9cb10f-b9f8-4668-ba8a-5039ef4dba43" in namespace "configmap-8410" to be "Succeeded or Failed" Mar 24 00:47:19.176: INFO: Pod "pod-configmaps-8a9cb10f-b9f8-4668-ba8a-5039ef4dba43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.946173ms Mar 24 00:47:21.180: INFO: Pod "pod-configmaps-8a9cb10f-b9f8-4668-ba8a-5039ef4dba43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007041618s Mar 24 00:47:23.185: INFO: Pod "pod-configmaps-8a9cb10f-b9f8-4668-ba8a-5039ef4dba43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011551245s STEP: Saw pod success Mar 24 00:47:23.185: INFO: Pod "pod-configmaps-8a9cb10f-b9f8-4668-ba8a-5039ef4dba43" satisfied condition "Succeeded or Failed" Mar 24 00:47:23.188: INFO: Trying to get logs from node latest-worker pod pod-configmaps-8a9cb10f-b9f8-4668-ba8a-5039ef4dba43 container configmap-volume-test: STEP: delete the pod Mar 24 00:47:23.238: INFO: Waiting for pod pod-configmaps-8a9cb10f-b9f8-4668-ba8a-5039ef4dba43 to disappear Mar 24 00:47:23.242: INFO: Pod pod-configmaps-8a9cb10f-b9f8-4668-ba8a-5039ef4dba43 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:47:23.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8410" for this suite. 
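The "mappings and Item mode" variant above mounts a ConfigMap volume in which individual keys are remapped to chosen paths and given per-file permissions. A sketch of the relevant pod spec (ConfigMap name from the log; the key name, target path, and 0400 mode are illustrative assumptions):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	mode := int32(0400) // per-file mode for the mapped item
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Remap key "data-1" to a chosen relative path with its own mode.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "configmap-volume", MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}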
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":4041,"failed":0} SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:47:23.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Mar 24 00:47:23.842: INFO: created pod pod-service-account-defaultsa Mar 24 00:47:23.842: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 24 00:47:23.862: INFO: created pod pod-service-account-mountsa Mar 24 00:47:23.862: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 24 00:47:23.883: INFO: created pod pod-service-account-nomountsa Mar 24 00:47:23.883: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 24 00:47:23.910: INFO: created pod pod-service-account-defaultsa-mountspec Mar 24 00:47:23.910: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 24 00:47:23.948: INFO: created pod pod-service-account-mountsa-mountspec Mar 24 00:47:23.948: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 24 00:47:23.958: INFO: created pod pod-service-account-nomountsa-mountspec Mar 24 00:47:23.958: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 24 00:47:23.995: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 24 00:47:23.995: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 24 00:47:24.016: INFO: created pod pod-service-account-mountsa-nomountspec Mar 24 00:47:24.016: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 24 00:47:24.059: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 24 00:47:24.059: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:47:24.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8715" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":239,"skipped":4051,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:47:24.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:47:25.624: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:47:27.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607646, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 00:47:29.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607646, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 00:47:31.732: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607646, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 00:47:33.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607646, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 00:47:35.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607646, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607645, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:47:38.647: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:47:38.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5172-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:47:39.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3639" for this suite. 
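On the "with pruning" part of this spec: with structural CRD schemas, fields that a mutating webhook patches in but that the schema does not declare are pruned from the stored object, so the spec verifies the mutation survives that interaction. A sketch of registering such a webhook through admissionregistration.k8s.io/v1 (the service reference, path, CA bundle, and rule values are illustrative placeholders, not the test's generated configuration):

package main

import (
	"context"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	path := "/mutating-custom-resource" // hypothetical path on the webhook server
	var caBundle []byte                 // fill with the PEM CA that signed the webhook serving cert
	sideEffects := admissionregistrationv1.SideEffectClassNone
	fail := admissionregistrationv1.Fail
	cfgObj := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-custom-resource-example"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "crd-mutator.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-3639", Name: "e2e-test-webhook", Path: &path,
				},
				CABundle: caBundle,
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-5172-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &fail,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(
		context.TODO(), cfgObj, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("registered mutating webhook")
}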
STEP: Destroying namespace "webhook-3639-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.691 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":240,"skipped":4051,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:47:39.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:47:55.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6289" for this suite. STEP: Destroying namespace "nsdeletetest-1347" for this suite. Mar 24 00:47:55.214: INFO: Namespace nsdeletetest-1347 was already deleted STEP: Destroying namespace "nsdeletetest-1448" for this suite. 
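Namespace deletion is asynchronous: the namespace enters Terminating while the controller removes its contents, so callers must poll for the namespace (and with it all of its pods) to disappear, which is what the spec's "Waiting for the namespace to be removed" step does, and why one teardown above reports the namespace was already deleted. A sketch of that create, delete, poll loop (intervals and names are arbitrary; the pod created inside is elided):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "nsdeletetest-"},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// ... create pods in ns.Name here ...
	if err := cs.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Deletion is asynchronous: poll until the namespace is really gone.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, getErr := cs.CoreV1().Namespaces().Get(ctx, ns.Name, metav1.GetOptions{})
		return apierrors.IsNotFound(getErr), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("namespace", ns.Name, "and all of its pods are gone")
}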
• [SLOW TEST:15.339 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":241,"skipped":4052,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:47:55.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:47:56.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:47:58.136: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607676, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607676, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607676, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607676, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 00:48:00.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607676, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607676, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607676, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607676, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:48:03.163: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:48:13.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7349" for this suite. STEP: Destroying namespace "webhook-7349-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.180 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":242,"skipped":4053,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:48:13.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:48:14.478: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:48:16.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607694, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607694, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607694, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607694, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:48:19.510: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 24 00:48:19.533: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:48:19.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2014" for this suite. STEP: Destroying namespace "webhook-2014-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.235 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":243,"skipped":4053,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:48:19.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7468 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 24 00:48:19.725: INFO: Found 0 stateful pods, waiting for 3 Mar 24 00:48:29.730: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 24 00:48:29.730: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 24 00:48:29.730: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 24 00:48:29.756: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 24 00:48:39.836: INFO: Updating stateful set ss2 Mar 24 00:48:39.842: INFO: Waiting for Pod statefulset-7468/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 24 00:48:49.850: INFO: Waiting for Pod statefulset-7468/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 24 00:48:59.988: INFO: Found 2 stateful pods, waiting for 3 Mar 24 00:49:09.994: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 24 00:49:09.994: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 24 00:49:09.994: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 24 00:49:10.019: INFO: Updating stateful set ss2 Mar 24 00:49:10.031: INFO: Waiting for Pod statefulset-7468/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 24 00:49:20.039: INFO: Waiting for Pod statefulset-7468/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 24 00:49:30.057: INFO: Updating stateful set ss2 Mar 24 00:49:30.087: INFO: Waiting for StatefulSet statefulset-7468/ss2 to complete update Mar 24 00:49:30.087: INFO: Waiting for Pod statefulset-7468/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 24 00:49:40.095: INFO: Waiting for StatefulSet statefulset-7468/ss2 to complete update Mar 24 00:49:40.095: INFO: Waiting for Pod statefulset-7468/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 24 00:49:50.098: INFO: Deleting all statefulset in ns statefulset-7468 Mar 24 00:49:50.101: INFO: Scaling statefulset ss2 to 0 Mar 24 00:50:20.119: INFO: Waiting for statefulset status.replicas updated to 0 Mar 24 00:50:20.123: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:50:20.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7468" for this suite. 
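A note on the mechanics the canary and phased steps above rely on: ordering is controlled by spec.updateStrategy.rollingUpdate.partition on the StatefulSet. A minimal sketch, assuming the ss2 set from the run above (3 replicas, the service "test" created in the BeforeEach, and the httpd images as logged); the selector labels and container name are illustrative, since the log does not show them:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  replicas: 3
  serviceName: test                    # headless service created in the BeforeEach above
  selector:
    matchLabels:
      app: ss2                         # illustrative; the test's actual selector is not in the log
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver                # illustrative name
        image: docker.io/library/httpd:2.4.39-alpine   # the updated image from the log
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                     # only ordinals >= 2 roll: ss2-2 is the canary

With partition: 2 only ss2-2 receives the new revision, matching the "Waiting for Pod statefulset-7468/ss2-2 ..." lines above; lowering the partition step by step (the later "Updating stateful set ss2" entries) performs the phased rolling update across ss2-1 and ss2-0.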
• [SLOW TEST:120.508 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":244,"skipped":4066,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:50:20.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 24 00:50:20.191: INFO: >>> kubeConfig: /root/.kube/config Mar 24 00:50:23.071: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:50:33.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6538" for this suite. 
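The OpenAPI-publishing check above creates two CRDs in different API groups and verifies that both schemas surface in the server's aggregated OpenAPI document (served at /openapi/v2). A hypothetical sketch of one such CRD; the group and names are illustrative stand-ins for the generated e2e-test-crd-publish-openapi-* resources:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.groupa.example.com        # must be <plural>.<group>
spec:
  group: groupa.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object

A second CRD identical in shape but in a different group (say groupb.example.com) completes the pair; each contributes its own definitions to the published document, which is what the test asserts.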
• [SLOW TEST:13.405 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":245,"skipped":4073,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:50:33.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 24 00:50:33.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-437e1ff6-97ff-4dbd-8038-5e3b18fc7f08" in namespace "downward-api-3584" to be "Succeeded or Failed" Mar 24 00:50:33.620: INFO: Pod "downwardapi-volume-437e1ff6-97ff-4dbd-8038-5e3b18fc7f08": Phase="Pending", Reason="", readiness=false. Elapsed: 3.870131ms Mar 24 00:50:35.624: INFO: Pod "downwardapi-volume-437e1ff6-97ff-4dbd-8038-5e3b18fc7f08": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0079196s Mar 24 00:50:37.628: INFO: Pod "downwardapi-volume-437e1ff6-97ff-4dbd-8038-5e3b18fc7f08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012057977s STEP: Saw pod success Mar 24 00:50:37.628: INFO: Pod "downwardapi-volume-437e1ff6-97ff-4dbd-8038-5e3b18fc7f08" satisfied condition "Succeeded or Failed" Mar 24 00:50:37.632: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-437e1ff6-97ff-4dbd-8038-5e3b18fc7f08 container client-container: STEP: delete the pod Mar 24 00:50:37.699: INFO: Waiting for pod downwardapi-volume-437e1ff6-97ff-4dbd-8038-5e3b18fc7f08 to disappear Mar 24 00:50:37.704: INFO: Pod downwardapi-volume-437e1ff6-97ff-4dbd-8038-5e3b18fc7f08 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:50:37.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3584" for this suite. 
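What the downward-API test above asserts: when a container declares no memory limit, a resourceFieldRef on limits.memory resolves to the node's allocatable memory instead of failing. A minimal sketch of a pod in that shape (name, image and command are illustrative, not the exact test pod):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]   # prints node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory      # no limit is set above, so the node default applies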
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4084,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:50:37.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-8554 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 24 00:50:37.781: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 24 00:50:37.845: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 24 00:50:39.849: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 24 00:50:41.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:50:43.850: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:50:45.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:50:47.850: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:50:49.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:50:51.849: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:50:53.850: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:50:55.851: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 24 00:50:57.850: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 24 00:50:57.855: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 24 00:51:01.875: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.39:8080/dial?request=hostname&protocol=http&host=10.244.2.38&port=8080&tries=1'] Namespace:pod-network-test-8554 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 00:51:01.875: INFO: >>> kubeConfig: /root/.kube/config I0324 00:51:01.915127 7 log.go:172] (0xc001c1e580) (0xc000b81220) Create stream I0324 00:51:01.915153 7 log.go:172] (0xc001c1e580) (0xc000b81220) Stream added, broadcasting: 1 I0324 00:51:01.918082 7 log.go:172] (0xc001c1e580) Reply frame received for 1 I0324 00:51:01.918147 7 log.go:172] (0xc001c1e580) (0xc000b817c0) Create stream I0324 00:51:01.918189 7 log.go:172] (0xc001c1e580) (0xc000b817c0) Stream added, broadcasting: 3 I0324 00:51:01.919345 7 log.go:172] (0xc001c1e580) Reply frame received for 3 I0324 00:51:01.919389 7 log.go:172] (0xc001c1e580) (0xc0016bbe00) Create stream I0324 00:51:01.919409 7 log.go:172] (0xc001c1e580) (0xc0016bbe00) Stream added, broadcasting: 5 I0324 00:51:01.920381 7 log.go:172] (0xc001c1e580) Reply frame received for 
5 I0324 00:51:02.005985 7 log.go:172] (0xc001c1e580) Data frame received for 3 I0324 00:51:02.006030 7 log.go:172] (0xc000b817c0) (3) Data frame handling I0324 00:51:02.006072 7 log.go:172] (0xc000b817c0) (3) Data frame sent I0324 00:51:02.006338 7 log.go:172] (0xc001c1e580) Data frame received for 5 I0324 00:51:02.006359 7 log.go:172] (0xc0016bbe00) (5) Data frame handling I0324 00:51:02.006406 7 log.go:172] (0xc001c1e580) Data frame received for 3 I0324 00:51:02.006431 7 log.go:172] (0xc000b817c0) (3) Data frame handling I0324 00:51:02.008312 7 log.go:172] (0xc001c1e580) Data frame received for 1 I0324 00:51:02.008329 7 log.go:172] (0xc000b81220) (1) Data frame handling I0324 00:51:02.008346 7 log.go:172] (0xc000b81220) (1) Data frame sent I0324 00:51:02.008361 7 log.go:172] (0xc001c1e580) (0xc000b81220) Stream removed, broadcasting: 1 I0324 00:51:02.008401 7 log.go:172] (0xc001c1e580) Go away received I0324 00:51:02.008426 7 log.go:172] (0xc001c1e580) (0xc000b81220) Stream removed, broadcasting: 1 I0324 00:51:02.008438 7 log.go:172] (0xc001c1e580) (0xc000b817c0) Stream removed, broadcasting: 3 I0324 00:51:02.008447 7 log.go:172] (0xc001c1e580) (0xc0016bbe00) Stream removed, broadcasting: 5 Mar 24 00:51:02.008: INFO: Waiting for responses: map[] Mar 24 00:51:02.011: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.39:8080/dial?request=hostname&protocol=http&host=10.244.1.163&port=8080&tries=1'] Namespace:pod-network-test-8554 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 24 00:51:02.011: INFO: >>> kubeConfig: /root/.kube/config I0324 00:51:02.047807 7 log.go:172] (0xc0057473f0) (0xc002460780) Create stream I0324 00:51:02.047829 7 log.go:172] (0xc0057473f0) (0xc002460780) Stream added, broadcasting: 1 I0324 00:51:02.050629 7 log.go:172] (0xc0057473f0) Reply frame received for 1 I0324 00:51:02.050707 7 log.go:172] (0xc0057473f0) (0xc002a6c000) Create stream I0324 00:51:02.050735 7 log.go:172] (0xc0057473f0) (0xc002a6c000) Stream added, broadcasting: 3 I0324 00:51:02.052358 7 log.go:172] (0xc0057473f0) Reply frame received for 3 I0324 00:51:02.052411 7 log.go:172] (0xc0057473f0) (0xc002460b40) Create stream I0324 00:51:02.052428 7 log.go:172] (0xc0057473f0) (0xc002460b40) Stream added, broadcasting: 5 I0324 00:51:02.053749 7 log.go:172] (0xc0057473f0) Reply frame received for 5 I0324 00:51:02.112039 7 log.go:172] (0xc0057473f0) Data frame received for 3 I0324 00:51:02.112077 7 log.go:172] (0xc002a6c000) (3) Data frame handling I0324 00:51:02.112107 7 log.go:172] (0xc002a6c000) (3) Data frame sent I0324 00:51:02.112325 7 log.go:172] (0xc0057473f0) Data frame received for 3 I0324 00:51:02.112346 7 log.go:172] (0xc002a6c000) (3) Data frame handling I0324 00:51:02.112537 7 log.go:172] (0xc0057473f0) Data frame received for 5 I0324 00:51:02.112574 7 log.go:172] (0xc002460b40) (5) Data frame handling I0324 00:51:02.114087 7 log.go:172] (0xc0057473f0) Data frame received for 1 I0324 00:51:02.114117 7 log.go:172] (0xc002460780) (1) Data frame handling I0324 00:51:02.114129 7 log.go:172] (0xc002460780) (1) Data frame sent I0324 00:51:02.114141 7 log.go:172] (0xc0057473f0) (0xc002460780) Stream removed, broadcasting: 1 I0324 00:51:02.114228 7 log.go:172] (0xc0057473f0) Go away received I0324 00:51:02.114275 7 log.go:172] (0xc0057473f0) (0xc002460780) Stream removed, broadcasting: 1 I0324 00:51:02.114300 7 log.go:172] (0xc0057473f0) (0xc002a6c000) Stream removed, broadcasting: 3 I0324 
00:51:02.114314 7 log.go:172] (0xc0057473f0) (0xc002460b40) Stream removed, broadcasting: 5 Mar 24 00:51:02.114: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:02.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8554" for this suite. • [SLOW TEST:24.411 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:02.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:02.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1590" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":248,"skipped":4189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:02.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:51:02.257: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 24 00:51:04.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-698 create -f -' Mar 24 00:51:07.410: INFO: stderr: "" Mar 24 00:51:07.410: INFO: stdout: "e2e-test-crd-publish-openapi-4006-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 24 00:51:07.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-698 delete e2e-test-crd-publish-openapi-4006-crds test-cr' Mar 24 00:51:07.559: INFO: stderr: "" Mar 24 00:51:07.559: INFO: stdout: "e2e-test-crd-publish-openapi-4006-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 24 00:51:07.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-698 apply -f -' Mar 24 00:51:08.141: INFO: stderr: "" Mar 24 00:51:08.141: INFO: stdout: "e2e-test-crd-publish-openapi-4006-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 24 00:51:08.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-698 delete e2e-test-crd-publish-openapi-4006-crds test-cr' Mar 24 00:51:08.302: INFO: stderr: "" Mar 24 00:51:08.302: INFO: stdout: "e2e-test-crd-publish-openapi-4006-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 24 00:51:08.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4006-crds' Mar 24 00:51:09.440: INFO: stderr: "" Mar 24 00:51:09.440: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4006-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:12.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-698" for this suite. • [SLOW TEST:10.158 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":249,"skipped":4239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:12.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-ecfbfe4b-709f-4dc4-963a-526d0e8bba75 STEP: Creating a pod to test consume configMaps Mar 24 00:51:12.419: INFO: Waiting up to 5m0s for pod "pod-configmaps-0eab4599-9803-4f7c-8bec-d999c653917e" in namespace "configmap-9187" to be "Succeeded or Failed" Mar 24 00:51:12.446: INFO: Pod "pod-configmaps-0eab4599-9803-4f7c-8bec-d999c653917e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.588597ms Mar 24 00:51:14.451: INFO: Pod "pod-configmaps-0eab4599-9803-4f7c-8bec-d999c653917e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031149057s Mar 24 00:51:16.455: INFO: Pod "pod-configmaps-0eab4599-9803-4f7c-8bec-d999c653917e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035669904s STEP: Saw pod success Mar 24 00:51:16.455: INFO: Pod "pod-configmaps-0eab4599-9803-4f7c-8bec-d999c653917e" satisfied condition "Succeeded or Failed" Mar 24 00:51:16.458: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-0eab4599-9803-4f7c-8bec-d999c653917e container configmap-volume-test: STEP: delete the pod Mar 24 00:51:16.491: INFO: Waiting for pod pod-configmaps-0eab4599-9803-4f7c-8bec-d999c653917e to disappear Mar 24 00:51:16.518: INFO: Pod pod-configmaps-0eab4599-9803-4f7c-8bec-d999c653917e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:16.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9187" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4265,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:16.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 24 00:51:17.112: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 24 00:51:19.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607877, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607877, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607877, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607877, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 24 00:51:22.148: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be 
mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:22.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1304" for this suite. STEP: Destroying namespace "webhook-1304-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.102 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":251,"skipped":4294,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:22.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 24 00:51:22.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a899ae1-5e99-4531-a428-346223fbd7fd" in namespace "projected-9371" to be "Succeeded or Failed" Mar 24 00:51:22.705: INFO: Pod "downwardapi-volume-3a899ae1-5e99-4531-a428-346223fbd7fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.512402ms Mar 24 00:51:24.710: INFO: Pod "downwardapi-volume-3a899ae1-5e99-4531-a428-346223fbd7fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009341921s Mar 24 00:51:26.714: INFO: Pod "downwardapi-volume-3a899ae1-5e99-4531-a428-346223fbd7fd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013374891s STEP: Saw pod success Mar 24 00:51:26.714: INFO: Pod "downwardapi-volume-3a899ae1-5e99-4531-a428-346223fbd7fd" satisfied condition "Succeeded or Failed" Mar 24 00:51:26.717: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3a899ae1-5e99-4531-a428-346223fbd7fd container client-container: STEP: delete the pod Mar 24 00:51:26.735: INFO: Waiting for pod downwardapi-volume-3a899ae1-5e99-4531-a428-346223fbd7fd to disappear Mar 24 00:51:26.776: INFO: Pod downwardapi-volume-3a899ae1-5e99-4531-a428-346223fbd7fd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:26.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9371" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4294,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:26.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Mar 24 00:51:31.362: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9082 pod-service-account-14eb5c36-751c-4565-be62-0085bfab97fd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 24 00:51:31.591: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9082 pod-service-account-14eb5c36-751c-4565-be62-0085bfab97fd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 24 00:51:31.797: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9082 pod-service-account-14eb5c36-751c-4565-be62-0085bfab97fd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:32.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9082" for this suite. 
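The three kubectl exec reads above correspond to the three files the kubelet projects for a pod's service account. A minimal sketch of a pod that receives them (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example    # hypothetical name
spec:
  serviceAccountName: default
  automountServiceAccountToken: true   # the default; set false to opt out
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]

Inside the container, token, ca.crt and namespace appear under /var/run/secrets/kubernetes.io/serviceaccount, which is exactly what the test reads back out.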
• [SLOW TEST:5.231 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":253,"skipped":4297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:32.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 24 00:51:32.084: INFO: Waiting up to 5m0s for pod "pod-bc4fb8d0-1ca6-4d65-a77b-dbb122c91118" in namespace "emptydir-5512" to be "Succeeded or Failed" Mar 24 00:51:32.117: INFO: Pod "pod-bc4fb8d0-1ca6-4d65-a77b-dbb122c91118": Phase="Pending", Reason="", readiness=false. Elapsed: 33.800735ms Mar 24 00:51:34.121: INFO: Pod "pod-bc4fb8d0-1ca6-4d65-a77b-dbb122c91118": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037547015s Mar 24 00:51:36.125: INFO: Pod "pod-bc4fb8d0-1ca6-4d65-a77b-dbb122c91118": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041849329s STEP: Saw pod success Mar 24 00:51:36.126: INFO: Pod "pod-bc4fb8d0-1ca6-4d65-a77b-dbb122c91118" satisfied condition "Succeeded or Failed" Mar 24 00:51:36.129: INFO: Trying to get logs from node latest-worker2 pod pod-bc4fb8d0-1ca6-4d65-a77b-dbb122c91118 container test-container: STEP: delete the pod Mar 24 00:51:36.143: INFO: Waiting for pod pod-bc4fb8d0-1ca6-4d65-a77b-dbb122c91118 to disappear Mar 24 00:51:36.148: INFO: Pod pod-bc4fb8d0-1ca6-4d65-a77b-dbb122c91118 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:36.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5512" for this suite. 
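The (root,0666,tmpfs) case above means: a file created as root, with mode 0666, on a memory-backed emptyDir. A rough sketch of a pod in that shape (the real test uses an e2e mount-test image and asserts on its output; image, names and command here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # runs as root by default
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs-backed, so contents live in RAM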
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4320,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:36.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Mar 24 00:51:36.255: INFO: Waiting up to 5m0s for pod "var-expansion-1c12e00b-103c-4e20-a1f8-317b17a09eed" in namespace "var-expansion-1234" to be "Succeeded or Failed" Mar 24 00:51:36.259: INFO: Pod "var-expansion-1c12e00b-103c-4e20-a1f8-317b17a09eed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306001ms Mar 24 00:51:38.788: INFO: Pod "var-expansion-1c12e00b-103c-4e20-a1f8-317b17a09eed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.532359015s Mar 24 00:51:40.793: INFO: Pod "var-expansion-1c12e00b-103c-4e20-a1f8-317b17a09eed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.537520346s STEP: Saw pod success Mar 24 00:51:40.793: INFO: Pod "var-expansion-1c12e00b-103c-4e20-a1f8-317b17a09eed" satisfied condition "Succeeded or Failed" Mar 24 00:51:40.797: INFO: Trying to get logs from node latest-worker pod var-expansion-1c12e00b-103c-4e20-a1f8-317b17a09eed container dapi-container: STEP: delete the pod Mar 24 00:51:40.835: INFO: Waiting for pod var-expansion-1c12e00b-103c-4e20-a1f8-317b17a09eed to disappear Mar 24 00:51:40.849: INFO: Pod var-expansion-1c12e00b-103c-4e20-a1f8-317b17a09eed no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:40.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1234" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4322,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:40.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:51:40.900: INFO: Creating deployment "test-recreate-deployment" Mar 24 00:51:40.932: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 24 00:51:40.940: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 24 00:51:42.947: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 24 00:51:42.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607900, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607900, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607901, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63720607900, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 24 00:51:44.954: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 24 00:51:44.963: INFO: Updating deployment test-recreate-deployment Mar 24 00:51:44.963: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 24 00:51:45.412: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6619 /apis/apps/v1/namespaces/deployment-6619/deployments/test-recreate-deployment 1b7a3048-b5c1-4ce2-95be-7c4464c52d56 2293130 2 2020-03-24 00:51:40 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil 
nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f79998 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-24 00:51:45 +0000 UTC,LastTransitionTime:2020-03-24 00:51:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-24 00:51:45 +0000 UTC,LastTransitionTime:2020-03-24 00:51:40 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 24 00:51:45.493: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-6619 /apis/apps/v1/namespaces/deployment-6619/replicasets/test-recreate-deployment-5f94c574ff 1e9b7b7c-c351-4e8d-a43f-0be2530a2911 2293128 1 2020-03-24 00:51:45 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 1b7a3048-b5c1-4ce2-95be-7c4464c52d56 0xc00349a117 0xc00349a118}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00349a1f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 24 00:51:45.493: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 24 00:51:45.493: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-6619 /apis/apps/v1/namespaces/deployment-6619/replicasets/test-recreate-deployment-846c7dd955 23c4f72c-9a48-407c-863d-ef4fbcda116c 2293119 2 2020-03-24 00:51:40 +0000 UTC map[name:sample-pod-3 
pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 1b7a3048-b5c1-4ce2-95be-7c4464c52d56 0xc00349a277 0xc00349a278}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00349a328 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 24 00:51:45.526: INFO: Pod "test-recreate-deployment-5f94c574ff-8bl2l" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-8bl2l test-recreate-deployment-5f94c574ff- deployment-6619 /api/v1/namespaces/deployment-6619/pods/test-recreate-deployment-5f94c574ff-8bl2l 9975cae8-6e8a-4206-9973-d3960e375c1c 2293132 0 2020-03-24 00:51:45 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 1e9b7b7c-c351-4e8d-a43f-0be2530a2911 0xc005fa08b7 0xc005fa08b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rt8fg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rt8fg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rt8fg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:51:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:51:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:51:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-24 00:51:45 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-24 00:51:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:45.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6619" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":256,"skipped":4338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:45.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Mar 24 00:51:45.584: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 24 00:51:45.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3865' Mar 24 00:51:45.995: INFO: stderr: "" Mar 24 00:51:45.995: INFO: stdout: "service/agnhost-slave created\n" Mar 24 00:51:45.995: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 24 00:51:45.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3865' Mar 24 00:51:46.345: INFO: stderr: "" Mar 24 00:51:46.345: INFO: stdout: "service/agnhost-master created\n" Mar 24 00:51:46.345: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 24 00:51:46.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3865' Mar 24 00:51:46.834: INFO: stderr: "" Mar 24 00:51:46.835: INFO: stdout: "service/frontend created\n" Mar 24 00:51:46.835: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 24 00:51:46.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3865' Mar 24 00:51:47.323: INFO: stderr: "" Mar 24 00:51:47.323: INFO: stdout: "deployment.apps/frontend created\n" Mar 24 00:51:47.324: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 24 00:51:47.324: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3865' Mar 24 00:51:47.631: INFO: stderr: "" Mar 24 00:51:47.632: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 24 00:51:47.632: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 24 00:51:47.632: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3865' Mar 24 00:51:47.943: INFO: stderr: "" Mar 24 00:51:47.943: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 24 00:51:47.943: INFO: Waiting for all frontend pods to be Running. Mar 24 00:51:57.994: INFO: Waiting for frontend to serve content. Mar 24 00:51:58.004: INFO: Trying to add a new entry to the guestbook. Mar 24 00:51:58.014: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 24 00:51:58.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3865' Mar 24 00:51:58.212: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 24 00:51:58.212: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 24 00:51:58.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3865' Mar 24 00:51:58.365: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 00:51:58.365: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 24 00:51:58.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3865' Mar 24 00:51:58.518: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 00:51:58.518: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 24 00:51:58.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3865' Mar 24 00:51:58.624: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 00:51:58.624: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 24 00:51:58.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3865' Mar 24 00:51:58.709: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 00:51:58.709: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 24 00:51:58.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3865' Mar 24 00:51:59.057: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 24 00:51:59.057: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:51:59.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3865" for this suite. 
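For readability: the frontend Service manifest that the test pipes to `kubectl create -f -` above is flattened into a single log line; it corresponds to the following YAML. This is a reconstruction of the logged content with only the line breaks and indentation restored, nothing added:

  kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3865 <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: frontend
    labels:
      app: guestbook
      tier: frontend
  spec:
    # if your cluster supports it, uncomment the following to automatically create
    # an external load-balanced IP for the frontend service.
    # type: LoadBalancer
    ports:
    - port: 80
    selector:
      app: guestbook
      tier: frontend
  EOF

The cleanup steps use `kubectl delete --grace-period=0 --force`, which removes the API objects immediately without waiting for the kubelet to confirm termination; that is why every delete above prints the "Immediate deletion does not wait for confirmation" warning.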
• [SLOW TEST:13.539 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":257,"skipped":4389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:51:59.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-2bb36e8f-b98f-4e6c-a150-a61a896f246b STEP: Creating a pod to test consume secrets Mar 24 00:51:59.506: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-32fc26c3-d624-44c4-8443-49f9c8dec754" in namespace "projected-9413" to be "Succeeded or Failed" Mar 24 00:51:59.603: INFO: Pod "pod-projected-secrets-32fc26c3-d624-44c4-8443-49f9c8dec754": Phase="Pending", Reason="", readiness=false. Elapsed: 97.573539ms Mar 24 00:52:01.607: INFO: Pod "pod-projected-secrets-32fc26c3-d624-44c4-8443-49f9c8dec754": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101154267s Mar 24 00:52:03.611: INFO: Pod "pod-projected-secrets-32fc26c3-d624-44c4-8443-49f9c8dec754": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10519422s STEP: Saw pod success Mar 24 00:52:03.611: INFO: Pod "pod-projected-secrets-32fc26c3-d624-44c4-8443-49f9c8dec754" satisfied condition "Succeeded or Failed" Mar 24 00:52:03.614: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-32fc26c3-d624-44c4-8443-49f9c8dec754 container projected-secret-volume-test: STEP: delete the pod Mar 24 00:52:03.642: INFO: Waiting for pod pod-projected-secrets-32fc26c3-d624-44c4-8443-49f9c8dec754 to disappear Mar 24 00:52:03.682: INFO: Pod pod-projected-secrets-32fc26c3-d624-44c4-8443-49f9c8dec754 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:52:03.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9413" for this suite. 
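The "with mappings" variant above mounts a secret through a projected volume and remaps its keys to custom file paths. A minimal sketch of an equivalent pod, assuming a hypothetical secret `demo-secret` with a `username` key (all names here are illustrative, not taken from the suite):

  kubectl create secret generic demo-secret --from-literal=username=admin
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox
      # prints the remapped file and exits, matching the
      # "Succeeded or Failed" wait pattern used by the test
      command: ["cat", "/etc/projected/my-username"]
      volumeMounts:
      - name: creds
        mountPath: /etc/projected
        readOnly: true
    volumes:
    - name: creds
      projected:
        sources:
        - secret:
            name: demo-secret
            items:
            - key: username
              path: my-username   # key "username" surfaces as this file
  EOF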
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4415,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:52:03.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-1e6261b1-66e9-4a6f-9a4b-16cde12b3ece STEP: Creating a pod to test consume configMaps Mar 24 00:52:03.744: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-377709d7-367d-4e3c-9655-def0b987318a" in namespace "projected-8532" to be "Succeeded or Failed" Mar 24 00:52:03.747: INFO: Pod "pod-projected-configmaps-377709d7-367d-4e3c-9655-def0b987318a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.626885ms Mar 24 00:52:05.751: INFO: Pod "pod-projected-configmaps-377709d7-367d-4e3c-9655-def0b987318a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007554753s Mar 24 00:52:07.757: INFO: Pod "pod-projected-configmaps-377709d7-367d-4e3c-9655-def0b987318a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012992352s STEP: Saw pod success Mar 24 00:52:07.757: INFO: Pod "pod-projected-configmaps-377709d7-367d-4e3c-9655-def0b987318a" satisfied condition "Succeeded or Failed" Mar 24 00:52:07.760: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-377709d7-367d-4e3c-9655-def0b987318a container projected-configmap-volume-test: STEP: delete the pod Mar 24 00:52:07.820: INFO: Waiting for pod pod-projected-configmaps-377709d7-367d-4e3c-9655-def0b987318a to disappear Mar 24 00:52:07.825: INFO: Pod pod-projected-configmaps-377709d7-367d-4e3c-9655-def0b987318a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:52:07.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8532" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4415,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:52:07.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-5110 STEP: creating replication controller nodeport-test in namespace services-5110 I0324 00:52:07.901073 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-5110, replica count: 2 I0324 00:52:10.951595 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0324 00:52:13.951874 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 24 00:52:13.951: INFO: Creating new exec pod Mar 24 00:52:18.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5110 execpodnj6ft -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 24 00:52:19.191: INFO: stderr: "I0324 00:52:19.107638 3196 log.go:172] (0xc0000eea50) (0xc000651400) Create stream\nI0324 00:52:19.107696 3196 log.go:172] (0xc0000eea50) (0xc000651400) Stream added, broadcasting: 1\nI0324 00:52:19.109990 3196 log.go:172] (0xc0000eea50) Reply frame received for 1\nI0324 00:52:19.110044 3196 log.go:172] (0xc0000eea50) (0xc0006514a0) Create stream\nI0324 00:52:19.110065 3196 log.go:172] (0xc0000eea50) (0xc0006514a0) Stream added, broadcasting: 3\nI0324 00:52:19.110921 3196 log.go:172] (0xc0000eea50) Reply frame received for 3\nI0324 00:52:19.110958 3196 log.go:172] (0xc0000eea50) (0xc000944000) Create stream\nI0324 00:52:19.110974 3196 log.go:172] (0xc0000eea50) (0xc000944000) Stream added, broadcasting: 5\nI0324 00:52:19.111746 3196 log.go:172] (0xc0000eea50) Reply frame received for 5\nI0324 00:52:19.185029 3196 log.go:172] (0xc0000eea50) Data frame received for 3\nI0324 00:52:19.185082 3196 log.go:172] (0xc0006514a0) (3) Data frame handling\nI0324 00:52:19.185275 3196 log.go:172] (0xc0000eea50) Data frame received for 5\nI0324 00:52:19.185317 3196 log.go:172] (0xc000944000) (5) Data frame handling\nI0324 00:52:19.185354 3196 log.go:172] (0xc000944000) (5) Data frame sent\nI0324 00:52:19.185388 3196 log.go:172] (0xc0000eea50) Data frame received for 5\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0324 00:52:19.185416 3196 log.go:172] (0xc000944000) (5) Data frame handling\nI0324 00:52:19.186655 3196 log.go:172] (0xc0000eea50) Data 
frame received for 1\nI0324 00:52:19.186673 3196 log.go:172] (0xc000651400) (1) Data frame handling\nI0324 00:52:19.186679 3196 log.go:172] (0xc000651400) (1) Data frame sent\nI0324 00:52:19.186722 3196 log.go:172] (0xc0000eea50) (0xc000651400) Stream removed, broadcasting: 1\nI0324 00:52:19.186735 3196 log.go:172] (0xc0000eea50) Go away received\nI0324 00:52:19.187215 3196 log.go:172] (0xc0000eea50) (0xc000651400) Stream removed, broadcasting: 1\nI0324 00:52:19.187250 3196 log.go:172] (0xc0000eea50) (0xc0006514a0) Stream removed, broadcasting: 3\nI0324 00:52:19.187279 3196 log.go:172] (0xc0000eea50) (0xc000944000) Stream removed, broadcasting: 5\n" Mar 24 00:52:19.192: INFO: stdout: "" Mar 24 00:52:19.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5110 execpodnj6ft -- /bin/sh -x -c nc -zv -t -w 2 10.96.59.107 80' Mar 24 00:52:19.383: INFO: stderr: "I0324 00:52:19.311439 3218 log.go:172] (0xc00031c9a0) (0xc00061b4a0) Create stream\nI0324 00:52:19.311485 3218 log.go:172] (0xc00031c9a0) (0xc00061b4a0) Stream added, broadcasting: 1\nI0324 00:52:19.313820 3218 log.go:172] (0xc00031c9a0) Reply frame received for 1\nI0324 00:52:19.313877 3218 log.go:172] (0xc00031c9a0) (0xc0008ee000) Create stream\nI0324 00:52:19.313898 3218 log.go:172] (0xc00031c9a0) (0xc0008ee000) Stream added, broadcasting: 3\nI0324 00:52:19.314683 3218 log.go:172] (0xc00031c9a0) Reply frame received for 3\nI0324 00:52:19.314711 3218 log.go:172] (0xc00031c9a0) (0xc0003d0960) Create stream\nI0324 00:52:19.314724 3218 log.go:172] (0xc00031c9a0) (0xc0003d0960) Stream added, broadcasting: 5\nI0324 00:52:19.315570 3218 log.go:172] (0xc00031c9a0) Reply frame received for 5\nI0324 00:52:19.376383 3218 log.go:172] (0xc00031c9a0) Data frame received for 3\nI0324 00:52:19.376424 3218 log.go:172] (0xc0008ee000) (3) Data frame handling\nI0324 00:52:19.376465 3218 log.go:172] (0xc00031c9a0) Data frame received for 5\nI0324 00:52:19.376494 3218 log.go:172] (0xc0003d0960) (5) Data frame handling\nI0324 00:52:19.376526 3218 log.go:172] (0xc0003d0960) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.59.107 80\nConnection to 10.96.59.107 80 port [tcp/http] succeeded!\nI0324 00:52:19.376703 3218 log.go:172] (0xc00031c9a0) Data frame received for 5\nI0324 00:52:19.376728 3218 log.go:172] (0xc0003d0960) (5) Data frame handling\nI0324 00:52:19.378558 3218 log.go:172] (0xc00031c9a0) Data frame received for 1\nI0324 00:52:19.378595 3218 log.go:172] (0xc00061b4a0) (1) Data frame handling\nI0324 00:52:19.378610 3218 log.go:172] (0xc00061b4a0) (1) Data frame sent\nI0324 00:52:19.378637 3218 log.go:172] (0xc00031c9a0) (0xc00061b4a0) Stream removed, broadcasting: 1\nI0324 00:52:19.378660 3218 log.go:172] (0xc00031c9a0) Go away received\nI0324 00:52:19.379097 3218 log.go:172] (0xc00031c9a0) (0xc00061b4a0) Stream removed, broadcasting: 1\nI0324 00:52:19.379120 3218 log.go:172] (0xc00031c9a0) (0xc0008ee000) Stream removed, broadcasting: 3\nI0324 00:52:19.379133 3218 log.go:172] (0xc00031c9a0) (0xc0003d0960) Stream removed, broadcasting: 5\n" Mar 24 00:52:19.383: INFO: stdout: "" Mar 24 00:52:19.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5110 execpodnj6ft -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31303' Mar 24 00:52:19.600: INFO: stderr: "I0324 00:52:19.518079 3239 log.go:172] (0xc0009960b0) (0xc000b12000) Create stream\nI0324 00:52:19.518135 3239 log.go:172] (0xc0009960b0) 
(0xc000b12000) Stream added, broadcasting: 1\nI0324 00:52:19.520867 3239 log.go:172] (0xc0009960b0) Reply frame received for 1\nI0324 00:52:19.520909 3239 log.go:172] (0xc0009960b0) (0xc0005d5720) Create stream\nI0324 00:52:19.520921 3239 log.go:172] (0xc0009960b0) (0xc0005d5720) Stream added, broadcasting: 3\nI0324 00:52:19.521993 3239 log.go:172] (0xc0009960b0) Reply frame received for 3\nI0324 00:52:19.522028 3239 log.go:172] (0xc0009960b0) (0xc0006b1360) Create stream\nI0324 00:52:19.522041 3239 log.go:172] (0xc0009960b0) (0xc0006b1360) Stream added, broadcasting: 5\nI0324 00:52:19.523185 3239 log.go:172] (0xc0009960b0) Reply frame received for 5\nI0324 00:52:19.593196 3239 log.go:172] (0xc0009960b0) Data frame received for 3\nI0324 00:52:19.593231 3239 log.go:172] (0xc0005d5720) (3) Data frame handling\nI0324 00:52:19.593259 3239 log.go:172] (0xc0009960b0) Data frame received for 5\nI0324 00:52:19.593269 3239 log.go:172] (0xc0006b1360) (5) Data frame handling\nI0324 00:52:19.593279 3239 log.go:172] (0xc0006b1360) (5) Data frame sent\nI0324 00:52:19.593286 3239 log.go:172] (0xc0009960b0) Data frame received for 5\nI0324 00:52:19.593293 3239 log.go:172] (0xc0006b1360) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31303\nConnection to 172.17.0.13 31303 port [tcp/31303] succeeded!\nI0324 00:52:19.595147 3239 log.go:172] (0xc0009960b0) Data frame received for 1\nI0324 00:52:19.595190 3239 log.go:172] (0xc000b12000) (1) Data frame handling\nI0324 00:52:19.595221 3239 log.go:172] (0xc000b12000) (1) Data frame sent\nI0324 00:52:19.595250 3239 log.go:172] (0xc0009960b0) (0xc000b12000) Stream removed, broadcasting: 1\nI0324 00:52:19.595276 3239 log.go:172] (0xc0009960b0) Go away received\nI0324 00:52:19.595791 3239 log.go:172] (0xc0009960b0) (0xc000b12000) Stream removed, broadcasting: 1\nI0324 00:52:19.595816 3239 log.go:172] (0xc0009960b0) (0xc0005d5720) Stream removed, broadcasting: 3\nI0324 00:52:19.595829 3239 log.go:172] (0xc0009960b0) (0xc0006b1360) Stream removed, broadcasting: 5\n" Mar 24 00:52:19.600: INFO: stdout: "" Mar 24 00:52:19.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5110 execpodnj6ft -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31303' Mar 24 00:52:19.851: INFO: stderr: "I0324 00:52:19.774285 3260 log.go:172] (0xc0003ca9a0) (0xc00066d400) Create stream\nI0324 00:52:19.774336 3260 log.go:172] (0xc0003ca9a0) (0xc00066d400) Stream added, broadcasting: 1\nI0324 00:52:19.777412 3260 log.go:172] (0xc0003ca9a0) Reply frame received for 1\nI0324 00:52:19.777462 3260 log.go:172] (0xc0003ca9a0) (0xc0008e0000) Create stream\nI0324 00:52:19.777477 3260 log.go:172] (0xc0003ca9a0) (0xc0008e0000) Stream added, broadcasting: 3\nI0324 00:52:19.778749 3260 log.go:172] (0xc0003ca9a0) Reply frame received for 3\nI0324 00:52:19.778790 3260 log.go:172] (0xc0003ca9a0) (0xc00066d4a0) Create stream\nI0324 00:52:19.778805 3260 log.go:172] (0xc0003ca9a0) (0xc00066d4a0) Stream added, broadcasting: 5\nI0324 00:52:19.779958 3260 log.go:172] (0xc0003ca9a0) Reply frame received for 5\nI0324 00:52:19.843924 3260 log.go:172] (0xc0003ca9a0) Data frame received for 5\nI0324 00:52:19.843970 3260 log.go:172] (0xc00066d4a0) (5) Data frame handling\nI0324 00:52:19.843994 3260 log.go:172] (0xc00066d4a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31303\nI0324 00:52:19.844016 3260 log.go:172] (0xc0003ca9a0) Data frame received for 5\nI0324 00:52:19.844034 3260 log.go:172] (0xc00066d4a0) (5) Data frame 
handling\nI0324 00:52:19.844061 3260 log.go:172] (0xc00066d4a0) (5) Data frame sent\nConnection to 172.17.0.12 31303 port [tcp/31303] succeeded!\nI0324 00:52:19.844779 3260 log.go:172] (0xc0003ca9a0) Data frame received for 5\nI0324 00:52:19.844804 3260 log.go:172] (0xc00066d4a0) (5) Data frame handling\nI0324 00:52:19.845105 3260 log.go:172] (0xc0003ca9a0) Data frame received for 3\nI0324 00:52:19.845223 3260 log.go:172] (0xc0008e0000) (3) Data frame handling\nI0324 00:52:19.846716 3260 log.go:172] (0xc0003ca9a0) Data frame received for 1\nI0324 00:52:19.846747 3260 log.go:172] (0xc00066d400) (1) Data frame handling\nI0324 00:52:19.846782 3260 log.go:172] (0xc00066d400) (1) Data frame sent\nI0324 00:52:19.846810 3260 log.go:172] (0xc0003ca9a0) (0xc00066d400) Stream removed, broadcasting: 1\nI0324 00:52:19.846839 3260 log.go:172] (0xc0003ca9a0) Go away received\nI0324 00:52:19.847280 3260 log.go:172] (0xc0003ca9a0) (0xc00066d400) Stream removed, broadcasting: 1\nI0324 00:52:19.847304 3260 log.go:172] (0xc0003ca9a0) (0xc0008e0000) Stream removed, broadcasting: 3\nI0324 00:52:19.847317 3260 log.go:172] (0xc0003ca9a0) (0xc00066d4a0) Stream removed, broadcasting: 5\n" Mar 24 00:52:19.852: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:52:19.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5110" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.026 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":260,"skipped":4423,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:52:19.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:52:19.919: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-453fe256-dda9-4e8c-99f4-b2e1e80e74aa" in namespace "security-context-test-7067" to be "Succeeded or Failed" Mar 24 00:52:19.923: INFO: Pod "alpine-nnp-false-453fe256-dda9-4e8c-99f4-b2e1e80e74aa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.564949ms Mar 24 00:52:21.927: INFO: Pod "alpine-nnp-false-453fe256-dda9-4e8c-99f4-b2e1e80e74aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007634616s Mar 24 00:52:23.931: INFO: Pod "alpine-nnp-false-453fe256-dda9-4e8c-99f4-b2e1e80e74aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011849179s Mar 24 00:52:23.931: INFO: Pod "alpine-nnp-false-453fe256-dda9-4e8c-99f4-b2e1e80e74aa" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:52:23.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7067" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:52:23.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:52:23.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Mar 24 00:52:24.148: INFO: stderr: "" Mar 24 00:52:24.148: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:52:24.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-956" for this suite. 
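The version check above parses the human-readable output; note the client (v1.19.0-alpha) is two minor versions ahead of the server (v1.17.0) in this run. For scripting, `kubectl version` can emit JSON directly (jq assumed available):

  kubectl --kubeconfig=/root/.kube/config version -o json \
    | jq -r '.clientVersion.gitVersion, .serverVersion.gitVersion'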
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":262,"skipped":4463,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:52:24.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 24 00:52:24.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7973' Mar 24 00:52:24.481: INFO: stderr: "" Mar 24 00:52:24.481: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 24 00:52:24.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7973' Mar 24 00:52:24.584: INFO: stderr: "" Mar 24 00:52:24.584: INFO: stdout: "update-demo-nautilus-9dnqd update-demo-nautilus-9qz6h " Mar 24 00:52:24.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dnqd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:24.701: INFO: stderr: "" Mar 24 00:52:24.702: INFO: stdout: "" Mar 24 00:52:24.702: INFO: update-demo-nautilus-9dnqd is created but not running Mar 24 00:52:29.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7973' Mar 24 00:52:29.813: INFO: stderr: "" Mar 24 00:52:29.813: INFO: stdout: "update-demo-nautilus-9dnqd update-demo-nautilus-9qz6h " Mar 24 00:52:29.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dnqd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:29.907: INFO: stderr: "" Mar 24 00:52:29.907: INFO: stdout: "true" Mar 24 00:52:29.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dnqd -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:30.007: INFO: stderr: "" Mar 24 00:52:30.007: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 00:52:30.007: INFO: validating pod update-demo-nautilus-9dnqd Mar 24 00:52:30.011: INFO: got data: { "image": "nautilus.jpg" } Mar 24 00:52:30.011: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 00:52:30.011: INFO: update-demo-nautilus-9dnqd is verified up and running Mar 24 00:52:30.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9qz6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:30.108: INFO: stderr: "" Mar 24 00:52:30.108: INFO: stdout: "true" Mar 24 00:52:30.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9qz6h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:30.194: INFO: stderr: "" Mar 24 00:52:30.194: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 00:52:30.194: INFO: validating pod update-demo-nautilus-9qz6h Mar 24 00:52:30.197: INFO: got data: { "image": "nautilus.jpg" } Mar 24 00:52:30.197: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 00:52:30.197: INFO: update-demo-nautilus-9qz6h is verified up and running STEP: scaling down the replication controller Mar 24 00:52:30.199: INFO: scanned /root for discovery docs: Mar 24 00:52:30.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7973' Mar 24 00:52:31.330: INFO: stderr: "" Mar 24 00:52:31.330: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 24 00:52:31.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7973' Mar 24 00:52:31.433: INFO: stderr: "" Mar 24 00:52:31.433: INFO: stdout: "update-demo-nautilus-9dnqd update-demo-nautilus-9qz6h " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 24 00:52:36.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7973' Mar 24 00:52:36.530: INFO: stderr: "" Mar 24 00:52:36.530: INFO: stdout: "update-demo-nautilus-9dnqd " Mar 24 00:52:36.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dnqd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:36.628: INFO: stderr: "" Mar 24 00:52:36.628: INFO: stdout: "true" Mar 24 00:52:36.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dnqd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:36.722: INFO: stderr: "" Mar 24 00:52:36.722: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 00:52:36.722: INFO: validating pod update-demo-nautilus-9dnqd Mar 24 00:52:36.725: INFO: got data: { "image": "nautilus.jpg" } Mar 24 00:52:36.725: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 00:52:36.725: INFO: update-demo-nautilus-9dnqd is verified up and running STEP: scaling up the replication controller Mar 24 00:52:36.727: INFO: scanned /root for discovery docs: Mar 24 00:52:36.727: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7973' Mar 24 00:52:37.844: INFO: stderr: "" Mar 24 00:52:37.844: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 24 00:52:37.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7973' Mar 24 00:52:37.934: INFO: stderr: "" Mar 24 00:52:37.934: INFO: stdout: "update-demo-nautilus-6tklx update-demo-nautilus-9dnqd " Mar 24 00:52:37.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tklx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:38.018: INFO: stderr: "" Mar 24 00:52:38.019: INFO: stdout: "" Mar 24 00:52:38.019: INFO: update-demo-nautilus-6tklx is created but not running Mar 24 00:52:43.019: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7973' Mar 24 00:52:43.113: INFO: stderr: "" Mar 24 00:52:43.113: INFO: stdout: "update-demo-nautilus-6tklx update-demo-nautilus-9dnqd " Mar 24 00:52:43.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tklx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:43.231: INFO: stderr: "" Mar 24 00:52:43.231: INFO: stdout: "true" Mar 24 00:52:43.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6tklx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:43.340: INFO: stderr: "" Mar 24 00:52:43.340: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 00:52:43.340: INFO: validating pod update-demo-nautilus-6tklx Mar 24 00:52:43.344: INFO: got data: { "image": "nautilus.jpg" } Mar 24 00:52:43.344: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 00:52:43.344: INFO: update-demo-nautilus-6tklx is verified up and running Mar 24 00:52:43.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dnqd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:43.444: INFO: stderr: "" Mar 24 00:52:43.444: INFO: stdout: "true" Mar 24 00:52:43.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9dnqd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7973' Mar 24 00:52:43.542: INFO: stderr: "" Mar 24 00:52:43.542: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 24 00:52:43.542: INFO: validating pod update-demo-nautilus-9dnqd Mar 24 00:52:43.545: INFO: got data: { "image": "nautilus.jpg" } Mar 24 00:52:43.545: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 24 00:52:43.545: INFO: update-demo-nautilus-9dnqd is verified up and running STEP: using delete to clean up resources Mar 24 00:52:43.546: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7973' Mar 24 00:52:43.652: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 24 00:52:43.652: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 24 00:52:43.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7973' Mar 24 00:52:43.754: INFO: stderr: "No resources found in kubectl-7973 namespace.\n" Mar 24 00:52:43.755: INFO: stdout: "" Mar 24 00:52:43.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7973 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 24 00:52:43.864: INFO: stderr: "" Mar 24 00:52:43.864: INFO: stdout: "update-demo-nautilus-6tklx\nupdate-demo-nautilus-9dnqd\n" Mar 24 00:52:44.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7973' Mar 24 00:52:44.461: INFO: stderr: "No resources found in kubectl-7973 namespace.\n" Mar 24 00:52:44.461: INFO: stdout: "" Mar 24 00:52:44.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7973 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 24 00:52:44.543: INFO: stderr: "" Mar 24 00:52:44.543: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:52:44.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7973" for this suite. 
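The scale-down/scale-up cycle above can be reproduced by hand with the same commands the test shells out to; the go-template in the status check prints `true` only when the `update-demo` container reports a running state:

  kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7973
  kubectl get pods -l name=update-demo --namespace=kubectl-7973 \
    -o template --template='{{range .items}}{{.metadata.name}} {{end}}'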
• [SLOW TEST:20.390 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":263,"skipped":4464,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:52:44.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-64f59efa-22b2-45c8-9d78-97597e616b83 STEP: Creating a pod to test consume configMaps Mar 24 00:52:44.831: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-71238e91-d6b3-4a47-a3d9-2051c4f495af" in namespace "projected-1217" to be "Succeeded or Failed" Mar 24 00:52:44.872: INFO: Pod "pod-projected-configmaps-71238e91-d6b3-4a47-a3d9-2051c4f495af": Phase="Pending", Reason="", readiness=false. Elapsed: 40.853779ms Mar 24 00:52:46.880: INFO: Pod "pod-projected-configmaps-71238e91-d6b3-4a47-a3d9-2051c4f495af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049010802s Mar 24 00:52:48.883: INFO: Pod "pod-projected-configmaps-71238e91-d6b3-4a47-a3d9-2051c4f495af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052783092s STEP: Saw pod success Mar 24 00:52:48.884: INFO: Pod "pod-projected-configmaps-71238e91-d6b3-4a47-a3d9-2051c4f495af" satisfied condition "Succeeded or Failed" Mar 24 00:52:48.886: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-71238e91-d6b3-4a47-a3d9-2051c4f495af container projected-configmap-volume-test: STEP: delete the pod Mar 24 00:52:48.941: INFO: Waiting for pod pod-projected-configmaps-71238e91-d6b3-4a47-a3d9-2051c4f495af to disappear Mar 24 00:52:48.946: INFO: Pod pod-projected-configmaps-71238e91-d6b3-4a47-a3d9-2051c4f495af no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:52:48.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1217" for this suite. 
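The framework polls the pod phase roughly every two seconds until it reaches "Succeeded or Failed". With a newer kubectl than the v1.19 client used in this run (an assumption: `kubectl wait` gained `--for=jsonpath` in v1.23), the same wait collapses to a one-liner:

  kubectl wait pod/pod-projected-configmaps-71238e91-d6b3-4a47-a3d9-2051c4f495af \
    --for=jsonpath='{.status.phase}'=Succeeded --timeout=5m --namespace=projected-1217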
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4473,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:52:48.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 24 00:52:49.004: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 24 00:52:51.895: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5949 create -f -' Mar 24 00:52:55.052: INFO: stderr: "" Mar 24 00:52:55.052: INFO: stdout: "e2e-test-crd-publish-openapi-6719-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 24 00:52:55.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5949 delete e2e-test-crd-publish-openapi-6719-crds test-cr' Mar 24 00:52:55.166: INFO: stderr: "" Mar 24 00:52:55.166: INFO: stdout: "e2e-test-crd-publish-openapi-6719-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 24 00:52:55.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5949 apply -f -' Mar 24 00:52:55.433: INFO: stderr: "" Mar 24 00:52:55.433: INFO: stdout: "e2e-test-crd-publish-openapi-6719-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 24 00:52:55.433: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5949 delete e2e-test-crd-publish-openapi-6719-crds test-cr' Mar 24 00:52:55.538: INFO: stderr: "" Mar 24 00:52:55.538: INFO: stdout: "e2e-test-crd-publish-openapi-6719-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 24 00:52:55.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6719-crds' Mar 24 00:52:55.766: INFO: stderr: "" Mar 24 00:52:55.766: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6719-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:52:57.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5949" for this suite. 
• [SLOW TEST:8.699 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":265,"skipped":4476,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:52:57.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 24 00:52:57.728: INFO: Waiting up to 5m0s for pod "pod-a87457e4-542d-4a4f-8fde-9c54ebaa3f60" in namespace "emptydir-4656" to be "Succeeded or Failed" Mar 24 00:52:57.732: INFO: Pod "pod-a87457e4-542d-4a4f-8fde-9c54ebaa3f60": Phase="Pending", Reason="", readiness=false. Elapsed: 3.401269ms Mar 24 00:52:59.736: INFO: Pod "pod-a87457e4-542d-4a4f-8fde-9c54ebaa3f60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007542713s Mar 24 00:53:01.739: INFO: Pod "pod-a87457e4-542d-4a4f-8fde-9c54ebaa3f60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011105854s STEP: Saw pod success Mar 24 00:53:01.739: INFO: Pod "pod-a87457e4-542d-4a4f-8fde-9c54ebaa3f60" satisfied condition "Succeeded or Failed" Mar 24 00:53:01.742: INFO: Trying to get logs from node latest-worker2 pod pod-a87457e4-542d-4a4f-8fde-9c54ebaa3f60 container test-container: STEP: delete the pod Mar 24 00:53:01.757: INFO: Waiting for pod pod-a87457e4-542d-4a4f-8fde-9c54ebaa3f60 to disappear Mar 24 00:53:01.761: INFO: Pod pod-a87457e4-542d-4a4f-8fde-9c54ebaa3f60 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:53:01.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4656" for this suite. 
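The (non-root,0777,tmpfs) case runs as a non-root UID against a memory-backed emptyDir and verifies that a file created with 0777 permissions behaves as expected. A rough equivalent sketch, with illustrative pod name and UID (the real test uses a dedicated mount-test image rather than busybox):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000        # non-root
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /mnt/tmpfs/f && chmod 0777 /mnt/tmpfs/f && ls -l /mnt/tmpfs/f"]
      volumeMounts:
      - name: tmp
        mountPath: /mnt/tmpfs
    volumes:
    - name: tmp
      emptyDir:
        medium: Memory       # tmpfs-backed
  EOF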
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4480,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:53:01.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 24 00:53:01.823: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e563f312-a2e1-4590-9a10-8c8422d937a5" in namespace "downward-api-6431" to be "Succeeded or Failed" Mar 24 00:53:01.863: INFO: Pod "downwardapi-volume-e563f312-a2e1-4590-9a10-8c8422d937a5": Phase="Pending", Reason="", readiness=false. Elapsed: 39.363807ms Mar 24 00:53:03.867: INFO: Pod "downwardapi-volume-e563f312-a2e1-4590-9a10-8c8422d937a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043236361s Mar 24 00:53:05.871: INFO: Pod "downwardapi-volume-e563f312-a2e1-4590-9a10-8c8422d937a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047420564s STEP: Saw pod success Mar 24 00:53:05.871: INFO: Pod "downwardapi-volume-e563f312-a2e1-4590-9a10-8c8422d937a5" satisfied condition "Succeeded or Failed" Mar 24 00:53:05.874: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e563f312-a2e1-4590-9a10-8c8422d937a5 container client-container: STEP: delete the pod Mar 24 00:53:05.922: INFO: Waiting for pod downwardapi-volume-e563f312-a2e1-4590-9a10-8c8422d937a5 to disappear Mar 24 00:53:05.948: INFO: Pod downwardapi-volume-e563f312-a2e1-4590-9a10-8c8422d937a5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:53:05.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6431" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4491,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:53:05.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:53:36.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9422" for this suite. 
• [SLOW TEST:30.879 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4630,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 24 00:53:36.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Mar 24 00:53:36.975: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 24 00:53:36.984: INFO: Waiting for terminating namespaces to be deleted...
Mar 24 00:53:36.986: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Mar 24 00:53:36.991: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 24 00:53:36.991: INFO: Container kindnet-cni ready: true, restart count 0
Mar 24 00:53:36.991: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 24 00:53:36.991: INFO: Container kube-proxy ready: true, restart count 0
Mar 24 00:53:36.991: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Mar 24 00:53:36.996: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 24 00:53:36.996: INFO: Container kindnet-cni ready: true, restart count 0
Mar 24 00:53:36.996: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 24 00:53:36.996: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ee3e9054-ace9-4730-9f8a-5b0b5fbcaa05 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-ee3e9054-ace9-4730-9f8a-5b0b5fbcaa05 off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ee3e9054-ace9-4730-9f8a-5b0b5fbcaa05
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 24 00:53:45.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4760" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:8.280 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":269,"skipped":4636,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 24 00:53:45.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-33f824f4-1cdd-463f-9131-b281a305f947
STEP: Creating configMap with name cm-test-opt-upd-a7564213-95fc-49e3-8140-875b1226e00c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-33f824f4-1cdd-463f-9131-b281a305f947
STEP: Updating configmap cm-test-opt-upd-a7564213-95fc-49e3-8140-875b1226e00c
STEP: Creating configMap with name cm-test-opt-create-00109c12-3f3a-4ea4-9477-a439c58f5495
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 24 00:55:13.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7391" for this suite.
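
For the NodeSelector spec that finished above: the test labels a node, then relaunches a pod whose nodeSelector requires that label, so the scheduler must place it on exactly that node. A hedged sketch of such a pod follows; the label key and value are hypothetical stand-ins for the random kubernetes.io/e2e-... key the suite generates.

// Sketch: pod constrained to nodes carrying an illustrative label.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"}, // illustrative
		Spec: corev1.PodSpec{
			// Only nodes labeled with this key/value are eligible;
			// the key here is hypothetical, not the test's random one.
			NodeSelector: map[string]string{"example.com/e2e-label": "42"},
			Containers:   []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.2"}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}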
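
The Projected configMap spec just above hinges on the ConfigMap sources being marked optional: deleting one source and creating another must be reflected in the mounted files without recreating the pod. A minimal sketch of such a pod, assuming illustrative names rather than the test's generated ones:

// Sketch: projected volume referencing an optional ConfigMap.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-example"}, // illustrative
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "viewer",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected", MountPath: "/etc/projected"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								// Optional: the pod starts even if this ConfigMap is
								// absent; later create/delete is reflected in the mount.
								LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
								Optional:             &optional,
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}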
• [SLOW TEST:88.604 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4645,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should patch a secret [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 24 00:55:13.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 24 00:55:13.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4817" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":271,"skipped":4668,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 24 00:55:13.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-wvqd
STEP: Creating a pod to test atomic-volume-subpath
Mar 24 00:55:13.948: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wvqd" in namespace "subpath-5173" to be "Succeeded or Failed"
Mar 24 00:55:13.956: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.900755ms
Mar 24 00:55:15.960: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011684025s
Mar 24 00:55:17.964: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Running", Reason="", readiness=true. Elapsed: 4.016136875s
Mar 24 00:55:20.163: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Running", Reason="", readiness=true. Elapsed: 6.215494602s
Mar 24 00:55:22.168: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Running", Reason="", readiness=true. Elapsed: 8.219688675s
Mar 24 00:55:24.172: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Running", Reason="", readiness=true. Elapsed: 10.224340119s
Mar 24 00:55:26.176: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Running", Reason="", readiness=true. Elapsed: 12.228506875s
Mar 24 00:55:28.181: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Running", Reason="", readiness=true. Elapsed: 14.23283963s
Mar 24 00:55:30.185: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Running", Reason="", readiness=true. Elapsed: 16.237254648s
Mar 24 00:55:32.189: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Running", Reason="", readiness=true. Elapsed: 18.241422806s
Mar 24 00:55:34.193: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Running", Reason="", readiness=true. Elapsed: 20.245405789s
Mar 24 00:55:36.198: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Running", Reason="", readiness=true. Elapsed: 22.249797001s
Mar 24 00:55:38.201: INFO: Pod "pod-subpath-test-configmap-wvqd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.253124419s
STEP: Saw pod success
Mar 24 00:55:38.201: INFO: Pod "pod-subpath-test-configmap-wvqd" satisfied condition "Succeeded or Failed"
Mar 24 00:55:38.203: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-wvqd container test-container-subpath-configmap-wvqd:
STEP: delete the pod
Mar 24 00:55:38.227: INFO: Waiting for pod pod-subpath-test-configmap-wvqd to disappear
Mar 24 00:55:38.231: INFO: Pod pod-subpath-test-configmap-wvqd no longer exists
STEP: Deleting pod pod-subpath-test-configmap-wvqd
Mar 24 00:55:38.231: INFO: Deleting pod "pod-subpath-test-configmap-wvqd" in namespace "subpath-5173"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 24 00:55:38.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5173" for this suite.
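
Looking back at the "Secrets should patch a secret" spec a little earlier: the patch step amounts to a strategic-merge patch against the Secret object. A hedged client-go sketch under stated assumptions: the kubeconfig path, namespace, secret name, and payload are illustrative, and the context-taking Patch signature shown is the one from client-go v0.18+; earlier releases omit the context argument.

// Sketch: strategic-merge patch of a Secret, as the spec's "patching
// the secret" step does via the API.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Add a label and overwrite one data key ("dmFsdWUy" is base64 for "value2").
	patch := []byte(`{"metadata":{"labels":{"patched":"true"}},"data":{"key":"dmFsdWUy"}}`)
	s, err := client.CoreV1().Secrets("default").Patch(
		context.TODO(), "example-secret", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched secret:", s.Name, s.Labels)
}

The later "deleting the secret using a LabelSelector" step can then find the object by the label the patch added.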
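
The subpath spec just above mounts a single entry of a ConfigMap-backed volume via subPath, which is what the atomic-writer machinery has to keep consistent across updates. A minimal sketch of the mechanism, with illustrative names in place of the test's generated ones:

// Sketch: mount one key of a ConfigMap volume at a file path via subPath.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-example"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /mnt/file.txt"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/mnt/file.txt",
					SubPath:   "file.txt", // mounts just this key of the volume
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "example-config"}, // assumed to exist
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}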
• [SLOW TEST:24.381 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":272,"skipped":4690,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 24 00:55:38.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 24 00:55:42.330: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 24 00:55:42.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3789" for this suite.
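
The termination-message spec above relies on TerminationMessagePolicy FallbackToLogsOnError: when a container fails without writing /dev/termination-log, the kubelet takes the message from the tail of the container's log (here the "DONE" it printed before exiting). A minimal sketch, with illustrative names and image:

// Sketch: failing container whose termination message falls back to logs.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// Prints DONE, then fails; nothing is written to
				// /dev/termination-log, so the policy falls back to the log tail.
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

After the container reaches Failed, status.containerStatuses[0].state.terminated.message should read "DONE", matching the assertion logged above.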
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4701,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:55:42.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-818eca83-422d-42ec-8a4a-9499edb1d4ab STEP: Creating a pod to test consume secrets Mar 24 00:55:42.449: INFO: Waiting up to 5m0s for pod "pod-secrets-0ed42eb6-02f4-4165-ae86-0fb544616db9" in namespace "secrets-8441" to be "Succeeded or Failed" Mar 24 00:55:42.453: INFO: Pod "pod-secrets-0ed42eb6-02f4-4165-ae86-0fb544616db9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162271ms Mar 24 00:55:44.457: INFO: Pod "pod-secrets-0ed42eb6-02f4-4165-ae86-0fb544616db9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008047744s Mar 24 00:55:46.461: INFO: Pod "pod-secrets-0ed42eb6-02f4-4165-ae86-0fb544616db9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012099365s STEP: Saw pod success Mar 24 00:55:46.461: INFO: Pod "pod-secrets-0ed42eb6-02f4-4165-ae86-0fb544616db9" satisfied condition "Succeeded or Failed" Mar 24 00:55:46.463: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-0ed42eb6-02f4-4165-ae86-0fb544616db9 container secret-volume-test: STEP: delete the pod Mar 24 00:55:46.486: INFO: Waiting for pod pod-secrets-0ed42eb6-02f4-4165-ae86-0fb544616db9 to disappear Mar 24 00:55:46.490: INFO: Pod pod-secrets-0ed42eb6-02f4-4165-ae86-0fb544616db9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:55:46.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8441" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4706,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 24 00:55:46.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 24 00:55:46.637: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1422 /api/v1/namespaces/watch-1422/configmaps/e2e-watch-test-resource-version cc6c8685-7a7c-45b2-aaa3-1aec609bdf01 2294634 0 2020-03-24 00:55:46 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 24 00:55:46.638: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1422 /api/v1/namespaces/watch-1422/configmaps/e2e-watch-test-resource-version cc6c8685-7a7c-45b2-aaa3-1aec609bdf01 2294635 0 2020-03-24 00:55:46 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 24 00:55:46.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1422" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":275,"skipped":4707,"failed":0} SSSSSSSSSSMar 24 00:55:46.646: INFO: Running AfterSuite actions on all nodes Mar 24 00:55:46.646: INFO: Running AfterSuite actions on node 1 Mar 24 00:55:46.646: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0} Ran 275 of 4992 Specs in 4773.613 seconds SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped PASS