I1223 01:50:44.953486 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1223 01:50:44.953679 6 e2e.go:109] Starting e2e run "70cea070-a5b4-4bdd-8919-5a7a8b6a0ca0" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1608688243 - Will randomize all specs
Will run 278 of 4846 specs
Dec 23 01:50:45.012: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 01:50:45.016: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 23 01:50:45.033: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 23 01:50:45.062: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 23 01:50:45.062: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 23 01:50:45.062: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 23 01:50:45.073: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Dec 23 01:50:45.073: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 23 01:50:45.073: INFO: e2e test version: v1.17.16-rc.0
Dec 23 01:50:45.074: INFO: kube-apiserver version: v1.17.5
Dec 23 01:50:45.074: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 01:50:45.078: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:50:45.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
Dec 23 01:50:45.227: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-c0259d1e-5dc6-4d2b-88dc-76a647afc3dd
STEP: Creating a pod to test consume secrets
Dec 23 01:50:45.278: INFO: Waiting up to 5m0s for pod "pod-secrets-f2a6298c-3266-4f79-8756-f52913c376f9" in namespace "secrets-7295" to be "success or failure"
Dec 23 01:50:45.294: INFO: Pod "pod-secrets-f2a6298c-3266-4f79-8756-f52913c376f9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.836454ms
Dec 23 01:50:47.298: INFO: Pod "pod-secrets-f2a6298c-3266-4f79-8756-f52913c376f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019568701s
Dec 23 01:50:49.302: INFO: Pod "pod-secrets-f2a6298c-3266-4f79-8756-f52913c376f9": Phase="Running", Reason="", readiness=true. Elapsed: 4.023928415s
Dec 23 01:50:51.306: INFO: Pod "pod-secrets-f2a6298c-3266-4f79-8756-f52913c376f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027983996s
STEP: Saw pod success
Dec 23 01:50:51.307: INFO: Pod "pod-secrets-f2a6298c-3266-4f79-8756-f52913c376f9" satisfied condition "success or failure"
Dec 23 01:50:51.309: INFO: Trying to get logs from node jerma-worker pod pod-secrets-f2a6298c-3266-4f79-8756-f52913c376f9 container secret-volume-test:
STEP: delete the pod
Dec 23 01:50:51.344: INFO: Waiting for pod pod-secrets-f2a6298c-3266-4f79-8756-f52913c376f9 to disappear
Dec 23 01:50:51.364: INFO: Pod pod-secrets-f2a6298c-3266-4f79-8756-f52913c376f9 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:50:51.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7295" for this suite.
• [SLOW TEST:6.292 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":29,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
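Editor's note: the log records only the pod lifecycle, not the manifest the spec creates. A minimal sketch of the kind of pod this test exercises, assuming busybox as the image; the secret name, pod name, and the 0400 mode are illustrative, not taken from the log:

```sh
# Hypothetical reproduction of the secret defaultMode check.
kubectl create secret generic demo-secret --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # Print the mode of the mounted file, then exit so the pod reaches Succeeded.
    command: ["sh", "-c", "ls -l /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400   # YAML reads the leading 0 as octal: -r--------
EOF

kubectl logs pod-secrets-demo   # expect: -r-------- ... /etc/secret-volume/data-1
```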
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:50:51.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Dec 23 01:50:51.423: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:50:59.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7165" for this suite.
• [SLOW TEST:7.679 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":2,"skipped":95,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
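Editor's note: a restartPolicy: Never pod along these lines (images and names assumed, not from the log) shows what the spec asserts: init containers run to completion, in order, before the app container starts:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox
    command: ["sh", "-c", "echo init1 ran"]
  - name: init2
    image: busybox
    command: ["sh", "-c", "echo init2 ran"]
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "echo main ran"]
EOF

# Both init containers should report terminated/Completed before run1 starts.
kubectl get pod init-demo -o jsonpath='{range .status.initContainerStatuses[*]}{.name}={.state.terminated.reason}{"\n"}{end}'
```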
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:50:59.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 01:51:03.481: INFO: Waiting up to 5m0s for pod "client-envvars-25586021-9bd8-4a8d-aa39-07892b546a19" in namespace "pods-1199" to be "success or failure"
Dec 23 01:51:03.519: INFO: Pod "client-envvars-25586021-9bd8-4a8d-aa39-07892b546a19": Phase="Pending", Reason="", readiness=false. Elapsed: 37.317094ms
Dec 23 01:51:05.522: INFO: Pod "client-envvars-25586021-9bd8-4a8d-aa39-07892b546a19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040942314s
Dec 23 01:51:07.612: INFO: Pod "client-envvars-25586021-9bd8-4a8d-aa39-07892b546a19": Phase="Running", Reason="", readiness=true. Elapsed: 4.130283952s
Dec 23 01:51:09.662: INFO: Pod "client-envvars-25586021-9bd8-4a8d-aa39-07892b546a19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.180880596s
STEP: Saw pod success
Dec 23 01:51:09.662: INFO: Pod "client-envvars-25586021-9bd8-4a8d-aa39-07892b546a19" satisfied condition "success or failure"
Dec 23 01:51:09.665: INFO: Trying to get logs from node jerma-worker pod client-envvars-25586021-9bd8-4a8d-aa39-07892b546a19 container env3cont:
STEP: delete the pod
Dec 23 01:51:09.942: INFO: Waiting for pod client-envvars-25586021-9bd8-4a8d-aa39-07892b546a19 to disappear
Dec 23 01:51:09.945: INFO: Pod client-envvars-25586021-9bd8-4a8d-aa39-07892b546a19 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:51:09.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1199" for this suite.
• [SLOW TEST:10.900 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":116,"failed":0}
SSSS
------------------------------
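Editor's note: this spec relies on the kubelet injecting `<SVC>_SERVICE_HOST`/`<SVC>_SERVICE_PORT` variables for services that exist when a pod starts. A hand-run equivalent, with made-up names:

```sh
# A service that exists before the pod is created...
kubectl create deployment server --image=k8s.gcr.io/pause:3.1
kubectl expose deployment server --name=fooservice --port=8765 --target-port=8080

# ...shows up in a later pod's environment without any explicit envFrom.
kubectl run env-dump --image=busybox --restart=Never -- sh -c 'env | grep FOOSERVICE'
# Once the pod has run to completion:
kubectl logs env-dump   # FOOSERVICE_SERVICE_HOST=..., FOOSERVICE_SERVICE_PORT=8765, ...
```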
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:51:09.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Dec 23 01:51:10.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:51:28.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3252" for this suite.
• [SLOW TEST:18.321 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":4,"skipped":120,"failed":0}
SSSSSSSSSSSS
------------------------------
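Editor's note: "check the new version name is served" can be verified from the client side against any multi-version CRD; the CRD name below is invented:

```sh
# Which versions the CRD currently serves:
kubectl get crd foos.example.com -o jsonpath='{range .spec.versions[*]}{.name}={.served}{"\n"}{end}'

# The published OpenAPI document the test inspects is the same one kubectl consumes;
# dump it and search for the CRD's definitions after the rename.
kubectl get --raw /openapi/v2 > openapi.json
```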
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":132,"failed":0} SS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:51:32.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-eb78563a-326b-489c-a9a3-bbd4bfce3eee STEP: Creating secret with name secret-projected-all-test-volume-ee0ec2d4-19d7-4888-a207-930156bd8d85 STEP: Creating a pod to test Check all projections for projected volume plugin Dec 23 01:51:32.822: INFO: Waiting up to 5m0s for pod "projected-volume-415b8236-0e59-45a2-8632-2ba7c6e87174" in namespace "projected-6943" to be "success or failure" Dec 23 01:51:32.824: INFO: Pod "projected-volume-415b8236-0e59-45a2-8632-2ba7c6e87174": Phase="Pending", Reason="", readiness=false. Elapsed: 2.410861ms Dec 23 01:51:34.828: INFO: Pod "projected-volume-415b8236-0e59-45a2-8632-2ba7c6e87174": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006500152s Dec 23 01:51:36.833: INFO: Pod "projected-volume-415b8236-0e59-45a2-8632-2ba7c6e87174": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011645348s STEP: Saw pod success Dec 23 01:51:36.833: INFO: Pod "projected-volume-415b8236-0e59-45a2-8632-2ba7c6e87174" satisfied condition "success or failure" Dec 23 01:51:36.835: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-415b8236-0e59-45a2-8632-2ba7c6e87174 container projected-all-volume-test: STEP: delete the pod Dec 23 01:51:36.873: INFO: Waiting for pod projected-volume-415b8236-0e59-45a2-8632-2ba7c6e87174 to disappear Dec 23 01:51:36.947: INFO: Pod projected-volume-415b8236-0e59-45a2-8632-2ba7c6e87174 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:51:36.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6943" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":6,"skipped":134,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:51:37.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Dec 23 01:51:37.353: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6395" to be "success or failure" Dec 23 01:51:37.374: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.545161ms Dec 23 01:51:39.378: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024245895s Dec 23 01:51:41.457: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103392969s Dec 23 01:51:43.462: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108354144s STEP: Saw pod success Dec 23 01:51:43.462: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Dec 23 01:51:43.465: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Dec 23 01:51:43.534: INFO: Waiting for pod pod-host-path-test to disappear Dec 23 01:51:43.537: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:51:43.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6395" for this suite. 
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:51:37.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
Dec 23 01:51:37.353: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6395" to be "success or failure"
Dec 23 01:51:37.374: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.545161ms
Dec 23 01:51:39.378: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024245895s
Dec 23 01:51:41.457: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103392969s
Dec 23 01:51:43.462: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.108354144s
STEP: Saw pod success
Dec 23 01:51:43.462: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 23 01:51:43.465: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Dec 23 01:51:43.534: INFO: Waiting for pod pod-host-path-test to disappear
Dec 23 01:51:43.537: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:51:43.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6395" for this suite.
• [SLOW TEST:6.369 seconds]
[sig-storage] HostPath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":147,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
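Editor's note: the "correct mode" check boils down to stat-ing the hostPath mount from inside the container. Roughly, with assumed names and /tmp as the host directory:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    # Report the permissions of the mounted host directory.
    command: ["sh", "-c", "stat -c 'mode: %a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp
      type: Directory
EOF
kubectl logs pod-host-path-demo
```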
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:51:43.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 23 01:51:44.096: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 23 01:51:46.105: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285104, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285104, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285104, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285104, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 01:51:49.139: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:51:49.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8293" for this suite.
STEP: Destroying namespace "webhook-8293-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.819 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":8,"skipped":163,"failed":0}
SSS
------------------------------
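Editor's note: "fail closed" is the failurePolicy: Fail setting on the webhook registration: if the API server cannot reach the webhook, the request is rejected rather than let through. A skeleton of such a registration (all names invented; the real test scopes the webhook to a marker namespace rather than cluster-wide):

```sh
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-demo
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail          # reject the request if the webhook is unreachable
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:                   # points at a service that (deliberately) doesn't answer
      name: no-such-webhook
      namespace: default
      path: /validate
EOF

# With the webhook unreachable, this create should be rejected:
kubectl create configmap should-be-rejected

# Clean up, or configmap creation stays blocked:
kubectl delete validatingwebhookconfiguration fail-closed-demo
```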
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:51:49.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 23 01:51:49.427: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab7f6068-1acc-40f4-9e5c-0e2c88b8ffad" in namespace "downward-api-3307" to be "success or failure"
Dec 23 01:51:49.448: INFO: Pod "downwardapi-volume-ab7f6068-1acc-40f4-9e5c-0e2c88b8ffad": Phase="Pending", Reason="", readiness=false. Elapsed: 20.971146ms
Dec 23 01:51:51.451: INFO: Pod "downwardapi-volume-ab7f6068-1acc-40f4-9e5c-0e2c88b8ffad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024342091s
Dec 23 01:51:53.455: INFO: Pod "downwardapi-volume-ab7f6068-1acc-40f4-9e5c-0e2c88b8ffad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028491122s
STEP: Saw pod success
Dec 23 01:51:53.455: INFO: Pod "downwardapi-volume-ab7f6068-1acc-40f4-9e5c-0e2c88b8ffad" satisfied condition "success or failure"
Dec 23 01:51:53.458: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-ab7f6068-1acc-40f4-9e5c-0e2c88b8ffad container client-container:
STEP: delete the pod
Dec 23 01:51:53.541: INFO: Waiting for pod downwardapi-volume-ab7f6068-1acc-40f4-9e5c-0e2c88b8ffad to disappear
Dec 23 01:51:53.544: INFO: Pod downwardapi-volume-ab7f6068-1acc-40f4-9e5c-0e2c88b8ffad no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:51:53.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3307" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":166,"failed":0}
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:51:53.551: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Dec 23 01:51:53.732: INFO: namespace kubectl-2364
Dec 23 01:51:53.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2364'
Dec 23 01:51:56.494: INFO: stderr: ""
Dec 23 01:51:56.494: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Dec 23 01:51:57.498: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 01:51:57.498: INFO: Found 0 / 1
Dec 23 01:51:58.498: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 01:51:58.499: INFO: Found 0 / 1
Dec 23 01:51:59.499: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 01:51:59.499: INFO: Found 0 / 1
Dec 23 01:52:00.498: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 01:52:00.498: INFO: Found 1 / 1
Dec 23 01:52:00.498: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 23 01:52:00.501: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 01:52:00.501: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Dec 23 01:52:00.501: INFO: wait on agnhost-master startup in kubectl-2364
Dec 23 01:52:00.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-hmjzc agnhost-master --namespace=kubectl-2364'
Dec 23 01:52:00.605: INFO: stderr: ""
Dec 23 01:52:00.605: INFO: stdout: "Paused\n"
STEP: exposing RC
Dec 23 01:52:00.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2364'
Dec 23 01:52:00.743: INFO: stderr: ""
Dec 23 01:52:00.743: INFO: stdout: "service/rm2 exposed\n"
Dec 23 01:52:00.749: INFO: Service rm2 in namespace kubectl-2364 found.
STEP: exposing service
Dec 23 01:52:03.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2364'
Dec 23 01:52:03.395: INFO: stderr: ""
Dec 23 01:52:03.395: INFO: stdout: "service/rm3 exposed\n"
Dec 23 01:52:03.466: INFO: Service rm3 in namespace kubectl-2364 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:52:05.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2364" for this suite.
• [SLOW TEST:11.927 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should create services for rc [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":10,"skipped":166,"failed":0}
SS
------------------------------
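Editor's note: the expose steps above can be replayed verbatim against any RC; the names and ports below are the same values the test used:

```sh
# Expose the RC itself, then expose the resulting service under a new name/port.
kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379

# Both services should select the same pods:
kubectl get endpoints rm2 rm3
```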
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:52:05.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 01:52:05.606: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8f985ee5-0fea-489f-a58d-6cade3bd434f" in namespace "security-context-test-3986" to be "success or failure"
Dec 23 01:52:05.609: INFO: Pod "busybox-user-65534-8f985ee5-0fea-489f-a58d-6cade3bd434f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.594019ms
Dec 23 01:52:07.636: INFO: Pod "busybox-user-65534-8f985ee5-0fea-489f-a58d-6cade3bd434f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029707914s
Dec 23 01:52:09.640: INFO: Pod "busybox-user-65534-8f985ee5-0fea-489f-a58d-6cade3bd434f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0330684s
Dec 23 01:52:09.640: INFO: Pod "busybox-user-65534-8f985ee5-0fea-489f-a58d-6cade3bd434f" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:52:09.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3986" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
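Editor's note: the runAsUser behavior is easy to confirm by hand; pod and container names here are made up:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "id -u"]
    securityContext:
      runAsUser: 65534   # "nobody" on most Linux distributions
EOF
kubectl logs busybox-user-demo   # expect: 65534
```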
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:52:09.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-963434fe-8dcc-4b82-80de-88bc2499d48f
STEP: Creating configMap with name cm-test-opt-upd-48efe95d-37bc-4c0f-aeb4-8dd73fc12a3a
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-963434fe-8dcc-4b82-80de-88bc2499d48f
STEP: Updating configmap cm-test-opt-upd-48efe95d-37bc-4c0f-aeb4-8dd73fc12a3a
STEP: Creating configMap with name cm-test-opt-create-c1a9e167-e7c0-4488-9049-417d22284821
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:52:21.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6916" for this suite.
• [SLOW TEST:12.255 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":201,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
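Editor's note: the "optional" marker is what lets a pod reference a configMap that doesn't exist yet, and what makes the delete/update/create sequence above observable in the mounted volume. The relevant stanza, with assumed names:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/cm; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-test-opt-create     # may not exist yet
      optional: true               # pod starts anyway; files appear once it's created
EOF

kubectl create configmap cm-test-opt-create --from-literal=data-1=value-1
# Within the kubelet sync period the key shows up in the running pod:
kubectl logs cm-optional-demo --tail=5
```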
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":222,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:52:26.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782 Dec 23 01:52:26.551: INFO: Pod name my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782: Found 0 pods out of 1 Dec 23 01:52:31.554: INFO: Pod name my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782: Found 1 pods out of 1 Dec 23 01:52:31.554: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782" are running Dec 23 01:52:31.564: INFO: Pod "my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782-kg26b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 01:52:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 01:52:31 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 01:52:31 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 01:52:26 +0000 UTC Reason: Message:}]) Dec 23 01:52:31.564: INFO: Trying to dial the pod Dec 23 01:52:36.574: INFO: Controller my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782: Got expected result from replica 1 [my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782-kg26b]: "my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782-kg26b", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:52:36.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9470" for this suite. 
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:52:26.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782
Dec 23 01:52:26.551: INFO: Pod name my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782: Found 0 pods out of 1
Dec 23 01:52:31.554: INFO: Pod name my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782: Found 1 pods out of 1
Dec 23 01:52:31.554: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782" are running
Dec 23 01:52:31.564: INFO: Pod "my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782-kg26b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 01:52:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 01:52:31 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 01:52:31 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 01:52:26 +0000 UTC Reason: Message:}])
Dec 23 01:52:31.564: INFO: Trying to dial the pod
Dec 23 01:52:36.574: INFO: Controller my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782: Got expected result from replica 1 [my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782-kg26b]: "my-hostname-basic-c8840414-88d5-4e5d-8c6a-30043d75c782-kg26b", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:52:36.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9470" for this suite.
• [SLOW TEST:10.328 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":14,"skipped":232,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":15,"skipped":254,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:52:37.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:52:45.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2745" for this suite. • [SLOW TEST:8.089 seconds] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":289,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:52:45.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-eccbf56f-a7b4-475e-969a-116175d86f0c STEP: Creating a pod to test consume secrets Dec 23 01:52:45.943: INFO: Waiting up to 5m0s for pod 
"pod-secrets-dc49d3fa-4c25-4581-b63b-255b53b4fc36" in namespace "secrets-41" to be "success or failure" Dec 23 01:52:45.947: INFO: Pod "pod-secrets-dc49d3fa-4c25-4581-b63b-255b53b4fc36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34268ms Dec 23 01:52:48.057: INFO: Pod "pod-secrets-dc49d3fa-4c25-4581-b63b-255b53b4fc36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114020187s Dec 23 01:52:50.061: INFO: Pod "pod-secrets-dc49d3fa-4c25-4581-b63b-255b53b4fc36": Phase="Running", Reason="", readiness=true. Elapsed: 4.118630209s Dec 23 01:52:52.092: INFO: Pod "pod-secrets-dc49d3fa-4c25-4581-b63b-255b53b4fc36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.149401952s STEP: Saw pod success Dec 23 01:52:52.092: INFO: Pod "pod-secrets-dc49d3fa-4c25-4581-b63b-255b53b4fc36" satisfied condition "success or failure" Dec 23 01:52:52.095: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-dc49d3fa-4c25-4581-b63b-255b53b4fc36 container secret-volume-test: STEP: delete the pod Dec 23 01:52:52.137: INFO: Waiting for pod pod-secrets-dc49d3fa-4c25-4581-b63b-255b53b4fc36 to disappear Dec 23 01:52:52.151: INFO: Pod pod-secrets-dc49d3fa-4c25-4581-b63b-255b53b4fc36 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:52:52.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-41" for this suite. • [SLOW TEST:6.336 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":296,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:52:52.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6051 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6051 STEP: creating replication controller externalsvc in namespace services-6051 I1223 01:52:52.362370 6 runners.go:189] Created replication 
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 01:52:52.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6051
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-6051
STEP: creating replication controller externalsvc in namespace services-6051
I1223 01:52:52.362370 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6051, replica count: 2
I1223 01:52:55.412795 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1223 01:52:58.413030 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1223 01:53:01.413225 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Dec 23 01:53:01.698: INFO: Creating new exec pod
Dec 23 01:53:05.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6051 execpodlq5jx -- /bin/sh -x -c nslookup clusterip-service'
Dec 23 01:53:06.268: INFO: stderr: "I1223 01:53:06.102323 126 log.go:172] (0xc00010a370) (0xc000773400) Create stream\nI1223 01:53:06.102382 126 log.go:172] (0xc00010a370) (0xc000773400) Stream added, broadcasting: 1\nI1223 01:53:06.104822 126 log.go:172] (0xc00010a370) Reply frame received for 1\nI1223 01:53:06.104987 126 log.go:172] (0xc00010a370) (0xc00092e000) Create stream\nI1223 01:53:06.105024 126 log.go:172] (0xc00010a370) (0xc00092e000) Stream added, broadcasting: 3\nI1223 01:53:06.106049 126 log.go:172] (0xc00010a370) Reply frame received for 3\nI1223 01:53:06.106075 126 log.go:172] (0xc00010a370) (0xc0009a0000) Create stream\nI1223 01:53:06.106086 126 log.go:172] (0xc00010a370) (0xc0009a0000) Stream added, broadcasting: 5\nI1223 01:53:06.106974 126 log.go:172] (0xc00010a370) Reply frame received for 5\nI1223 01:53:06.210807 126 log.go:172] (0xc00010a370) Data frame received for 5\nI1223 01:53:06.210847 126 log.go:172] (0xc0009a0000) (5) Data frame handling\nI1223 01:53:06.210871 126 log.go:172] (0xc0009a0000) (5) Data frame sent\n+ nslookup clusterip-service\nI1223 01:53:06.255797 126 log.go:172] (0xc00010a370) Data frame received for 3\nI1223 01:53:06.255843 126 log.go:172] (0xc00092e000) (3) Data frame handling\nI1223 01:53:06.255880 126 log.go:172] (0xc00092e000) (3) Data frame sent\nI1223 01:53:06.256599 126 log.go:172] (0xc00010a370) Data frame received for 3\nI1223 01:53:06.256616 126 log.go:172] (0xc00092e000) (3) Data frame handling\nI1223 01:53:06.256631 126 log.go:172] (0xc00092e000) (3) Data frame sent\nI1223 01:53:06.257139 126 log.go:172] (0xc00010a370) Data frame received for 3\nI1223 01:53:06.257161 126 log.go:172] (0xc00092e000) (3) Data frame handling\nI1223 01:53:06.257416 126 log.go:172] (0xc00010a370) Data frame received for 5\nI1223 01:53:06.257442 126 log.go:172] (0xc0009a0000) (5) Data frame handling\nI1223 01:53:06.259453 126 log.go:172] (0xc00010a370) Data frame received for 1\nI1223 01:53:06.259473 126 log.go:172] (0xc000773400) (1) Data frame handling\nI1223 01:53:06.259492 126 log.go:172] (0xc000773400) (1) Data frame sent\nI1223 01:53:06.259507 126 log.go:172] (0xc00010a370) (0xc000773400) Stream removed, broadcasting: 1\nI1223 01:53:06.259524 126 log.go:172] (0xc00010a370) Go away received\nI1223 01:53:06.259999 126 log.go:172] (0xc00010a370) (0xc000773400) Stream removed, broadcasting: 1\nI1223 01:53:06.260025 126 log.go:172] (0xc00010a370) (0xc00092e000) Stream removed, broadcasting: 3\nI1223 01:53:06.260047 126 log.go:172] (0xc00010a370) (0xc0009a0000) Stream removed, broadcasting: 5\n"
Dec 23 01:53:06.268: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6051.svc.cluster.local\tcanonical name = externalsvc.services-6051.svc.cluster.local.\nName:\texternalsvc.services-6051.svc.cluster.local\nAddress: 10.104.239.38\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-6051, will wait for the garbage collector to delete the pods
Dec 23 01:53:06.419: INFO: Deleting ReplicationController externalsvc took: 6.654299ms
Dec 23 01:53:06.819: INFO: Terminating ReplicationController externalsvc pods took: 400.302755ms
Dec 23 01:53:14.568: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 01:53:14.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6051" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:22.505 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":18,"skipped":308,"failed":0}
SSSS
------------------------------
Elapsed: 4.014239072s STEP: Saw pod success Dec 23 01:53:18.750: INFO: Pod "downward-api-30440cac-7c70-4e95-9a91-25c0a3970d98" satisfied condition "success or failure" Dec 23 01:53:18.753: INFO: Trying to get logs from node jerma-worker2 pod downward-api-30440cac-7c70-4e95-9a91-25c0a3970d98 container dapi-container: STEP: delete the pod Dec 23 01:53:18.789: INFO: Waiting for pod downward-api-30440cac-7c70-4e95-9a91-25c0a3970d98 to disappear Dec 23 01:53:18.812: INFO: Pod downward-api-30440cac-7c70-4e95-9a91-25c0a3970d98 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:53:18.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9102" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":312,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:53:18.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1223 01:53:30.807194 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Dec 23 01:53:30.807: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:53:30.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1313" for this suite. • [SLOW TEST:11.994 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":20,"skipped":323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:53:30.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:53:35.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2866" for this suite. 
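For context: the Kubelet test above runs a command in a busybox pod and asserts the output lands in the container log. A minimal hand-run equivalent, assuming a reachable cluster; the pod name and timing below are illustrative, not the suite's generated ones:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo hello from busybox"]
EOF
# Give the container a moment to run to completion, then read its log.
sleep 10
kubectl logs busybox-logs-demo    # expect: hello from busybox
kubectl delete pod busybox-logs-demo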
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":362,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:53:35.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-c5aec082-aa5e-4bc6-bd0a-5a4f0c656ffc in namespace container-probe-8870 Dec 23 01:53:42.196: INFO: Started pod liveness-c5aec082-aa5e-4bc6-bd0a-5a4f0c656ffc in namespace container-probe-8870 STEP: checking the pod's current state and verifying that restartCount is present Dec 23 01:53:42.201: INFO: Initial restart count of pod liveness-c5aec082-aa5e-4bc6-bd0a-5a4f0c656ffc is 0 Dec 23 01:54:00.267: INFO: Restart count of pod container-probe-8870/liveness-c5aec082-aa5e-4bc6-bd0a-5a4f0c656ffc is now 1 (18.065308373s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:54:00.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8870" for this suite. 
• [SLOW TEST:24.536 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":371,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:54:00.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 01:54:00.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Dec 23 01:54:00.719: INFO: stderr: "" Dec 23 01:54:00.719: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17+\", GitVersion:\"v1.17.16-rc.0\", GitCommit:\"737e2c461a2999fa242d39e77b9252d0eee7167e\", GitTreeState:\"clean\", BuildDate:\"2020-12-09T11:14:02Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.5\", GitCommit:\"e0fccafd69541e3750d460ba0f9743b90336f24f\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:11:15Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:54:00.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4970" for this suite. 
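The Kubectl version test simply asserts that both halves of the version report (client and server version.Info) are printed, as in the stdout captured below. The same data can be pulled by hand:

kubectl version            # human-readable client and server version.Info structs
kubectl version -o json    # identical content as JSON, convenient for scripting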
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":23,"skipped":376,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:54:00.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Dec 23 01:54:00.790: INFO: Waiting up to 5m0s for pod "pod-bd77d8b9-877e-43bf-834c-2be4572de8a6" in namespace "emptydir-5501" to be "success or failure" Dec 23 01:54:00.794: INFO: Pod "pod-bd77d8b9-877e-43bf-834c-2be4572de8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.57726ms Dec 23 01:54:02.819: INFO: Pod "pod-bd77d8b9-877e-43bf-834c-2be4572de8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028757148s Dec 23 01:54:04.825: INFO: Pod "pod-bd77d8b9-877e-43bf-834c-2be4572de8a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034279947s Dec 23 01:54:06.828: INFO: Pod "pod-bd77d8b9-877e-43bf-834c-2be4572de8a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037112678s STEP: Saw pod success Dec 23 01:54:06.828: INFO: Pod "pod-bd77d8b9-877e-43bf-834c-2be4572de8a6" satisfied condition "success or failure" Dec 23 01:54:06.829: INFO: Trying to get logs from node jerma-worker pod pod-bd77d8b9-877e-43bf-834c-2be4572de8a6 container test-container: STEP: delete the pod Dec 23 01:54:06.956: INFO: Waiting for pod pod-bd77d8b9-877e-43bf-834c-2be4572de8a6 to disappear Dec 23 01:54:06.968: INFO: Pod pod-bd77d8b9-877e-43bf-834c-2be4572de8a6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:54:06.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5501" for this suite. 
• [SLOW TEST:6.240 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:54:06.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 23 01:54:15.333: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 01:54:15.465: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 01:54:17.465: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 01:54:17.469: INFO: Pod pod-with-poststart-exec-hook still exists Dec 23 01:54:19.465: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Dec 23 01:54:19.469: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:54:19.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6963" for this suite. 
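The lifecycle-hook test drives a postStart exec hook and then checks its effect. A minimal standalone version, close to the upstream documentation example; the message file path is illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo started > /usr/share/message"]
EOF
# The hook runs right after the container is created; verify its side effect:
kubectl exec poststart-demo -- cat /usr/share/message   # expect: started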
• [SLOW TEST:12.498 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:54:19.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-8d0dd2a1-f4f6-4967-80ba-26fa6cbb49c5 STEP: Creating a pod to test consume secrets Dec 23 01:54:19.612: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-18a1d05c-1a27-4720-8bb4-0f4be1bf028d" in namespace "projected-575" to be "success or failure" Dec 23 01:54:19.616: INFO: Pod "pod-projected-secrets-18a1d05c-1a27-4720-8bb4-0f4be1bf028d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.862355ms Dec 23 01:54:21.680: INFO: Pod "pod-projected-secrets-18a1d05c-1a27-4720-8bb4-0f4be1bf028d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068362784s Dec 23 01:54:23.688: INFO: Pod "pod-projected-secrets-18a1d05c-1a27-4720-8bb4-0f4be1bf028d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075959242s STEP: Saw pod success Dec 23 01:54:23.688: INFO: Pod "pod-projected-secrets-18a1d05c-1a27-4720-8bb4-0f4be1bf028d" satisfied condition "success or failure" Dec 23 01:54:23.709: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-18a1d05c-1a27-4720-8bb4-0f4be1bf028d container projected-secret-volume-test: STEP: delete the pod Dec 23 01:54:23.785: INFO: Waiting for pod pod-projected-secrets-18a1d05c-1a27-4720-8bb4-0f4be1bf028d to disappear Dec 23 01:54:23.814: INFO: Pod pod-projected-secrets-18a1d05c-1a27-4720-8bb4-0f4be1bf028d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:54:23.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-575" for this suite. 
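"Consumable with mappings" means the secret key is remapped to a chosen path inside the projected volume, rather than mounted under its own name. A hand-run sketch (secret name, key, and target path are illustrative):

kubectl create secret generic demo-secret --from-literal=username=admin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/projected/my-group/my-username"]
    volumeMounts:
    - name: proj
      mountPath: /projected
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: demo-secret
          items:
          - key: username
            path: my-group/my-username   # the mapping: key renamed and relocated on disk
EOF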
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:54:23.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9542.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9542.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9542.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9542.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9542.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9542.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 23 01:54:30.013: INFO: DNS probes using dns-9542/dns-test-3e217cea-a6b3-43d6-925c-0459fc73078e succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:54:30.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9542" for this suite. 
• [SLOW TEST:6.343 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":27,"skipped":480,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:54:30.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Dec 23 01:54:35.676: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:54:35.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2353" for this suite. 
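The runtime test writes DONE to a non-default terminationMessagePath as a non-root user and expects it to surface in the container status, matching the "Expected: &{DONE}" assertion above. A hand-run equivalent (path and UID illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom"]
    terminationMessagePath: /dev/termination-custom   # non-default path, as in the test
EOF
sleep 10
# After the container exits, the kubelet copies the file into the pod status:
kubectl get pod termination-msg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # expect: DONE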
• [SLOW TEST:5.539 seconds] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":504,"failed":0} [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:54:35.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-kl8d4 in namespace proxy-9562 I1223 01:54:35.830037 6 runners.go:189] Created replication controller with name: proxy-service-kl8d4, namespace: proxy-9562, replica count: 1 I1223 01:54:36.880493 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 01:54:37.880738 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 01:54:38.881076 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1223 01:54:39.881334 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1223 01:54:40.881578 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1223 01:54:41.881824 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1223 01:54:42.882056 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1223 01:54:43.882405 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1223 01:54:44.882664 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 
waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1223 01:54:45.882912 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1223 01:54:46.883123 6 runners.go:189] proxy-service-kl8d4 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 23 01:54:46.886: INFO: setup took 11.130281427s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Dec 23 01:54:46.891: INFO: (0) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 5.519117ms) Dec 23 01:54:46.892: INFO: (0) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 6.604136ms) Dec 23 01:54:46.893: INFO: (0) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 7.232633ms) Dec 23 01:54:46.893: INFO: (0) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 7.287569ms) Dec 23 01:54:46.894: INFO: (0) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 8.235105ms) Dec 23 01:54:46.894: INFO: (0) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... (200; 8.231175ms) Dec 23 01:54:46.895: INFO: (0) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 9.593782ms) Dec 23 01:54:46.896: INFO: (0) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 9.614973ms) Dec 23 01:54:46.896: INFO: (0) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 9.904924ms) Dec 23 01:54:46.896: INFO: (0) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 10.147848ms) Dec 23 01:54:46.900: INFO: (0) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 14.258875ms) Dec 23 01:54:46.900: INFO: (0) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 14.113129ms) Dec 23 01:54:46.901: INFO: (0) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test<... (200; 23.858722ms) Dec 23 01:54:46.927: INFO: (1) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 24.54637ms) Dec 23 01:54:46.927: INFO: (1) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 24.483974ms) Dec 23 01:54:46.927: INFO: (1) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 24.577117ms) Dec 23 01:54:46.927: INFO: (1) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 24.587059ms) Dec 23 01:54:46.927: INFO: (1) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 24.635288ms) Dec 23 01:54:46.928: INFO: (1) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test (200; 27.461969ms) Dec 23 01:54:46.934: INFO: (2) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: ... 
(200; 4.082969ms) Dec 23 01:54:46.939: INFO: (2) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 8.895989ms) Dec 23 01:54:46.939: INFO: (2) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 8.851696ms) Dec 23 01:54:46.939: INFO: (2) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... (200; 9.049275ms) Dec 23 01:54:46.939: INFO: (2) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 9.039065ms) Dec 23 01:54:46.940: INFO: (2) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 9.309023ms) Dec 23 01:54:46.940: INFO: (2) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 9.422338ms) Dec 23 01:54:46.940: INFO: (2) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 9.378479ms) Dec 23 01:54:46.940: INFO: (2) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 9.547753ms) Dec 23 01:54:46.940: INFO: (2) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 9.779332ms) Dec 23 01:54:46.940: INFO: (2) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 9.850156ms) Dec 23 01:54:46.942: INFO: (2) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 11.358019ms) Dec 23 01:54:46.942: INFO: (2) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 11.331925ms) Dec 23 01:54:46.942: INFO: (2) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 11.363703ms) Dec 23 01:54:46.946: INFO: (3) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 3.823151ms) Dec 23 01:54:46.946: INFO: (3) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 3.662564ms) Dec 23 01:54:46.946: INFO: (3) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 3.739596ms) Dec 23 01:54:46.946: INFO: (3) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test (200; 5.1653ms) Dec 23 01:54:46.947: INFO: (3) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 5.355368ms) Dec 23 01:54:46.947: INFO: (3) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 5.437468ms) Dec 23 01:54:46.948: INFO: (3) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... 
(200; 5.67554ms) Dec 23 01:54:46.948: INFO: (3) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 5.683021ms) Dec 23 01:54:46.948: INFO: (3) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 5.668846ms) Dec 23 01:54:46.948: INFO: (3) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 5.666607ms) Dec 23 01:54:46.948: INFO: (3) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 5.903786ms) Dec 23 01:54:46.948: INFO: (3) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 6.052895ms) Dec 23 01:54:46.949: INFO: (3) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 6.971481ms) Dec 23 01:54:46.949: INFO: (3) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 7.010968ms) Dec 23 01:54:46.953: INFO: (4) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test (200; 5.926075ms) Dec 23 01:54:46.955: INFO: (4) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 5.897227ms) Dec 23 01:54:46.955: INFO: (4) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... (200; 5.921787ms) Dec 23 01:54:46.955: INFO: (4) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 5.969507ms) Dec 23 01:54:46.955: INFO: (4) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 6.010785ms) Dec 23 01:54:46.958: INFO: (5) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 2.782003ms) Dec 23 01:54:46.958: INFO: (5) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 3.031726ms) Dec 23 01:54:46.958: INFO: (5) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 3.058423ms) Dec 23 01:54:46.959: INFO: (5) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 3.664735ms) Dec 23 01:54:46.959: INFO: (5) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 3.791552ms) Dec 23 01:54:46.959: INFO: (5) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test<... (200; 3.985341ms) Dec 23 01:54:46.959: INFO: (5) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 4.102213ms) Dec 23 01:54:46.959: INFO: (5) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 4.105658ms) Dec 23 01:54:46.959: INFO: (5) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 4.299544ms) Dec 23 01:54:46.959: INFO: (5) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 4.270872ms) Dec 23 01:54:46.960: INFO: (5) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 4.532878ms) Dec 23 01:54:46.960: INFO: (5) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 4.967835ms) Dec 23 01:54:46.964: INFO: (6) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 3.580753ms) Dec 23 01:54:46.964: INFO: (6) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... 
(200; 3.591579ms) Dec 23 01:54:46.964: INFO: (6) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 3.634764ms) Dec 23 01:54:46.964: INFO: (6) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 3.617324ms) Dec 23 01:54:46.966: INFO: (6) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 5.345769ms) Dec 23 01:54:46.966: INFO: (6) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test (200; 6.753241ms) Dec 23 01:54:46.967: INFO: (6) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 6.82942ms) Dec 23 01:54:46.968: INFO: (6) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 7.339257ms) Dec 23 01:54:46.968: INFO: (6) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 7.536207ms) Dec 23 01:54:46.968: INFO: (6) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 7.512004ms) Dec 23 01:54:46.968: INFO: (6) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 7.933238ms) Dec 23 01:54:46.972: INFO: (7) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 3.67089ms) Dec 23 01:54:46.972: INFO: (7) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 3.800241ms) Dec 23 01:54:46.973: INFO: (7) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... (200; 4.922656ms) Dec 23 01:54:46.974: INFO: (7) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 5.310086ms) Dec 23 01:54:46.974: INFO: (7) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 5.334059ms) Dec 23 01:54:46.974: INFO: (7) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 5.421453ms) Dec 23 01:54:46.974: INFO: (7) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 5.352313ms) Dec 23 01:54:46.974: INFO: (7) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: ... (200; 5.992664ms) Dec 23 01:54:46.974: INFO: (7) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 5.778562ms) Dec 23 01:54:46.974: INFO: (7) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 6.02761ms) Dec 23 01:54:46.977: INFO: (8) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 2.319234ms) Dec 23 01:54:46.978: INFO: (8) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 3.254042ms) Dec 23 01:54:46.978: INFO: (8) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 3.859007ms) Dec 23 01:54:46.978: INFO: (8) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 3.514746ms) Dec 23 01:54:46.979: INFO: (8) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... 
(200; 3.61994ms) Dec 23 01:54:46.979: INFO: (8) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 4.247805ms) Dec 23 01:54:46.979: INFO: (8) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 3.886647ms) Dec 23 01:54:46.979: INFO: (8) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 3.61312ms) Dec 23 01:54:46.979: INFO: (8) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 4.042868ms) Dec 23 01:54:46.979: INFO: (8) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 4.201819ms) Dec 23 01:54:46.979: INFO: (8) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 4.776966ms) Dec 23 01:54:46.979: INFO: (8) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 4.313853ms) Dec 23 01:54:46.980: INFO: (8) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: ... (200; 3.843372ms) Dec 23 01:54:46.985: INFO: (9) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 5.074774ms) Dec 23 01:54:46.985: INFO: (9) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test (200; 6.30934ms) Dec 23 01:54:46.986: INFO: (9) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 6.380871ms) Dec 23 01:54:46.986: INFO: (9) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... (200; 6.398611ms) Dec 23 01:54:46.987: INFO: (9) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 6.46573ms) Dec 23 01:54:46.994: INFO: (10) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 7.388649ms) Dec 23 01:54:46.994: INFO: (10) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: ... (200; 8.186421ms) Dec 23 01:54:46.995: INFO: (10) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... 
(200; 8.134948ms) Dec 23 01:54:46.995: INFO: (10) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 8.24163ms) Dec 23 01:54:46.995: INFO: (10) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 8.292162ms) Dec 23 01:54:46.996: INFO: (10) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 9.023105ms) Dec 23 01:54:46.996: INFO: (10) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 9.045704ms) Dec 23 01:54:46.996: INFO: (10) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 9.109908ms) Dec 23 01:54:46.996: INFO: (10) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 9.075093ms) Dec 23 01:54:46.996: INFO: (10) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 9.221795ms) Dec 23 01:54:46.996: INFO: (10) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 9.191769ms) Dec 23 01:54:47.000: INFO: (11) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 3.680878ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 5.253983ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test<... (200; 5.222465ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 5.329983ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 5.33163ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 5.324152ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 5.357372ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 5.420987ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 5.516488ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 5.410836ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 5.467938ms) Dec 23 01:54:47.001: INFO: (11) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 5.434858ms) Dec 23 01:54:47.006: INFO: (11) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 9.878937ms) Dec 23 01:54:47.006: INFO: (11) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 9.951797ms) Dec 23 01:54:47.006: INFO: (11) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 10.035998ms) Dec 23 01:54:47.015: INFO: (12) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 9.379517ms) Dec 23 01:54:47.016: INFO: (12) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... 
(200; 9.536602ms) Dec 23 01:54:47.016: INFO: (12) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 9.645767ms) Dec 23 01:54:47.016: INFO: (12) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 9.663658ms) Dec 23 01:54:47.016: INFO: (12) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 9.714999ms) Dec 23 01:54:47.016: INFO: (12) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 9.687447ms) Dec 23 01:54:47.016: INFO: (12) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 9.927156ms) Dec 23 01:54:47.016: INFO: (12) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 10.055184ms) Dec 23 01:54:47.017: INFO: (12) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 10.657531ms) Dec 23 01:54:47.017: INFO: (12) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 10.672318ms) Dec 23 01:54:47.017: INFO: (12) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 10.614546ms) Dec 23 01:54:47.017: INFO: (12) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 10.875001ms) Dec 23 01:54:47.017: INFO: (12) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 10.836983ms) Dec 23 01:54:47.017: INFO: (12) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: ... (200; 6.293192ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 6.247588ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 6.339598ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 6.368339ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 6.338626ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... (200; 6.406099ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 6.466745ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 6.402518ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 6.455093ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 6.716339ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 6.623109ms) Dec 23 01:54:47.024: INFO: (13) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test<... (200; 22.190392ms) Dec 23 01:54:47.047: INFO: (14) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 22.295683ms) Dec 23 01:54:47.047: INFO: (14) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 22.533782ms) Dec 23 01:54:47.047: INFO: (14) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... 
(200; 22.562337ms) Dec 23 01:54:47.047: INFO: (14) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 22.5667ms) Dec 23 01:54:47.047: INFO: (14) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 22.577784ms) Dec 23 01:54:47.047: INFO: (14) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test<... (200; 4.705296ms) Dec 23 01:54:47.054: INFO: (15) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 4.72585ms) Dec 23 01:54:47.054: INFO: (15) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 4.82368ms) Dec 23 01:54:47.054: INFO: (15) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 4.772165ms) Dec 23 01:54:47.054: INFO: (15) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 4.887695ms) Dec 23 01:54:47.054: INFO: (15) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 4.733942ms) Dec 23 01:54:47.054: INFO: (15) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 5.18919ms) Dec 23 01:54:47.054: INFO: (15) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test<... (200; 3.578728ms) Dec 23 01:54:47.058: INFO: (16) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test (200; 5.198773ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 5.308656ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 5.397507ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 5.567892ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 5.628898ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 5.682034ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 5.655118ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 5.670675ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 5.683133ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 5.69154ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 5.692253ms) Dec 23 01:54:47.060: INFO: (16) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 5.812975ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 4.272116ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test (200; 4.464101ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 4.426788ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... 
(200; 4.497673ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 4.443213ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 4.481941ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 4.489511ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 4.725615ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 4.683985ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... (200; 4.765101ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 4.822522ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 4.7998ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 4.835996ms) Dec 23 01:54:47.065: INFO: (17) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 4.763972ms) Dec 23 01:54:47.068: INFO: (18) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 2.857558ms) Dec 23 01:54:47.069: INFO: (18) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test<... (200; 3.565221ms) Dec 23 01:54:47.070: INFO: (18) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 4.68486ms) Dec 23 01:54:47.070: INFO: (18) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 4.742115ms) Dec 23 01:54:47.070: INFO: (18) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 4.802523ms) Dec 23 01:54:47.070: INFO: (18) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... 
(200; 4.766077ms) Dec 23 01:54:47.070: INFO: (18) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:462/proxy/: tls qux (200; 4.758835ms) Dec 23 01:54:47.070: INFO: (18) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 4.945855ms) Dec 23 01:54:47.070: INFO: (18) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff/proxy/: test (200; 4.856542ms) Dec 23 01:54:47.071: INFO: (18) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 5.575048ms) Dec 23 01:54:47.071: INFO: (18) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname2/proxy/: tls qux (200; 5.901208ms) Dec 23 01:54:47.071: INFO: (18) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 5.78624ms) Dec 23 01:54:47.071: INFO: (18) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname2/proxy/: bar (200; 5.877806ms) Dec 23 01:54:47.071: INFO: (18) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 5.845691ms) Dec 23 01:54:47.071: INFO: (18) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname1/proxy/: foo (200; 5.898785ms) Dec 23 01:54:47.075: INFO: (19) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:443/proxy/: test (200; 4.66826ms) Dec 23 01:54:47.076: INFO: (19) /api/v1/namespaces/proxy-9562/pods/https:proxy-service-kl8d4-nhlff:460/proxy/: tls baz (200; 4.77614ms) Dec 23 01:54:47.076: INFO: (19) /api/v1/namespaces/proxy-9562/services/proxy-service-kl8d4:portname1/proxy/: foo (200; 4.720004ms) Dec 23 01:54:47.076: INFO: (19) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:1080/proxy/: ... (200; 4.72211ms) Dec 23 01:54:47.076: INFO: (19) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:160/proxy/: foo (200; 4.793893ms) Dec 23 01:54:47.076: INFO: (19) /api/v1/namespaces/proxy-9562/services/http:proxy-service-kl8d4:portname2/proxy/: bar (200; 4.758017ms) Dec 23 01:54:47.076: INFO: (19) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 4.816819ms) Dec 23 01:54:47.076: INFO: (19) /api/v1/namespaces/proxy-9562/pods/http:proxy-service-kl8d4-nhlff:162/proxy/: bar (200; 4.880973ms) Dec 23 01:54:47.076: INFO: (19) /api/v1/namespaces/proxy-9562/services/https:proxy-service-kl8d4:tlsportname1/proxy/: tls baz (200; 4.792049ms) Dec 23 01:54:47.076: INFO: (19) /api/v1/namespaces/proxy-9562/pods/proxy-service-kl8d4-nhlff:1080/proxy/: test<... (200; 4.927459ms) STEP: deleting ReplicationController proxy-service-kl8d4 in namespace proxy-9562, will wait for the garbage collector to delete the pods Dec 23 01:54:47.134: INFO: Deleting ReplicationController proxy-service-kl8d4 took: 5.410902ms Dec 23 01:54:47.534: INFO: Terminating ReplicationController proxy-service-kl8d4 pods took: 400.234948ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:54:50.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9562" for this suite. 
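Every attempt logged above is a GET against an apiserver proxy subresource. The same endpoints can be exercised by hand through kubectl proxy; the pod, service, namespace, and port names below are placeholders for the generated ones in the log:

kubectl proxy --port=8001 &
# Proxy to a named pod port, plain and scheme-qualified:
curl http://127.0.0.1:8001/api/v1/namespaces/<ns>/pods/<pod>:162/proxy/
curl http://127.0.0.1:8001/api/v1/namespaces/<ns>/pods/https:<pod>:443/proxy/
# Proxy to a service by port name; the http:/https: prefixes pick the backend scheme:
curl http://127.0.0.1:8001/api/v1/namespaces/<ns>/services/<svc>:portname1/proxy/
curl http://127.0.0.1:8001/api/v1/namespaces/<ns>/services/https:<svc>:tlsportname1/proxy/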
• [SLOW TEST:14.540 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":29,"skipped":504,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:54:50.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 23 01:54:50.708: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 23 01:54:53.048: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285290, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285290, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285290, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285290, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 23 01:54:56.238: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 01:54:56.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7687-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:54:57.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-692" for this suite. STEP: Destroying namespace "webhook-692-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.264 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":30,"skipped":510,"failed":0} SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:54:57.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 01:54:57.621: INFO: Create a RollingUpdate DaemonSet Dec 23 01:54:57.626: INFO: Check that daemon pods launch on every node of the cluster Dec 23 01:54:57.631: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 01:54:57.633: INFO: Number of nodes with available pods: 0 Dec 23 01:54:57.633: INFO: Node jerma-worker is running more than one daemon pod Dec 23 01:54:58.638: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 01:54:58.640: INFO: Number of nodes with available pods: 0 Dec 23 01:54:58.640: INFO: Node jerma-worker is running more than one daemon pod Dec 23 01:54:59.643: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 01:54:59.647: INFO: Number of nodes with available pods: 0 Dec 23 01:54:59.647: INFO: Node jerma-worker is running more than one daemon pod Dec 23 01:55:00.638: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 01:55:00.641: INFO: Number of nodes with available pods: 0 Dec 23 01:55:00.641: INFO: Node jerma-worker is running more than one daemon pod Dec 23 
01:55:01.638: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 01:55:01.642: INFO: Number of nodes with available pods: 2 Dec 23 01:55:01.642: INFO: Number of running nodes: 2, number of available pods: 2 Dec 23 01:55:01.642: INFO: Update the DaemonSet to trigger a rollout Dec 23 01:55:01.648: INFO: Updating DaemonSet daemon-set Dec 23 01:55:14.684: INFO: Roll back the DaemonSet before rollout is complete Dec 23 01:55:14.691: INFO: Updating DaemonSet daemon-set Dec 23 01:55:14.691: INFO: Make sure DaemonSet rollback is complete Dec 23 01:55:14.717: INFO: Wrong image for pod: daemon-set-mmmx8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Dec 23 01:55:14.717: INFO: Pod daemon-set-mmmx8 is not available Dec 23 01:55:14.790: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 01:55:15.795: INFO: Wrong image for pod: daemon-set-mmmx8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Dec 23 01:55:15.795: INFO: Pod daemon-set-mmmx8 is not available Dec 23 01:55:15.800: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 01:55:16.823: INFO: Wrong image for pod: daemon-set-mmmx8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Dec 23 01:55:16.823: INFO: Pod daemon-set-mmmx8 is not available Dec 23 01:55:16.827: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 01:55:17.795: INFO: Pod daemon-set-55hvt is not available Dec 23 01:55:17.800: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3537, will wait for the garbage collector to delete the pods Dec 23 01:55:17.865: INFO: Deleting DaemonSet.extensions daemon-set took: 5.920075ms Dec 23 01:55:18.266: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.26941ms Dec 23 01:55:24.369: INFO: Number of nodes with available pods: 0 Dec 23 01:55:24.369: INFO: Number of running nodes: 0, number of available pods: 0 Dec 23 01:55:24.376: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3537/daemonsets","resourceVersion":"23925102"},"items":null} Dec 23 01:55:24.379: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3537/pods","resourceVersion":"23925102"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:55:24.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3537" for this suite. 
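The update-then-rollback flow above (switch the DaemonSet to a non-existent image, then roll back before the rollout completes) can be reproduced from the command line. A hedged sketch via os/exec follows; the namespace and the container name `app` are illustrative assumptions, not read from this log.

```go
// rollback.go: sketch of triggering and then undoing a DaemonSet rollout
// with kubectl, mirroring the update-then-rollback flow in the test above.
// The namespace and container name are illustrative.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s", args, out)
	if err != nil {
		fmt.Println("error:", err)
	}
}

func main() {
	ns := "daemonsets-demo" // illustrative namespace
	// Trigger a rollout by switching to a bad image, then roll it back
	// before it completes, as the test does.
	run("-n", ns, "set", "image", "daemonset/daemon-set", "app=foo:non-existent")
	run("-n", ns, "rollout", "undo", "daemonset/daemon-set")
	run("-n", ns, "rollout", "status", "daemonset/daemon-set")
}
```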
• [SLOW TEST:26.888 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":31,"skipped":515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:55:24.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1358 STEP: creating a pod Dec 23 01:55:24.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-5132 -- logs-generator --log-lines-total 100 --run-duration 20s' Dec 23 01:55:24.603: INFO: stderr: "" Dec 23 01:55:24.603: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Dec 23 01:55:24.603: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Dec 23 01:55:24.603: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5132" to be "running and ready, or succeeded" Dec 23 01:55:24.606: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.752764ms Dec 23 01:55:26.610: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006744833s Dec 23 01:55:28.663: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.060138443s Dec 23 01:55:28.663: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Dec 23 01:55:28.663: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Dec 23 01:55:28.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5132' Dec 23 01:55:28.788: INFO: stderr: "" Dec 23 01:55:28.788: INFO: stdout: "I1223 01:55:26.909940 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/nrdp 401\nI1223 01:55:27.110214 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/qpj7 338\nI1223 01:55:27.310218 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/9k62 471\nI1223 01:55:27.510156 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/rfb8 349\nI1223 01:55:27.710101 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/ssz 271\nI1223 01:55:27.910135 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/djs8 303\nI1223 01:55:28.110141 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/zgn 248\nI1223 01:55:28.310143 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/79t 542\nI1223 01:55:28.510129 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/zb9l 374\nI1223 01:55:28.710099 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/qlqt 377\n" STEP: limiting log lines Dec 23 01:55:28.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5132 --tail=1' Dec 23 01:55:29.078: INFO: stderr: "" Dec 23 01:55:29.078: INFO: stdout: "I1223 01:55:28.910153 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/m4fp 553\n" Dec 23 01:55:29.078: INFO: got output "I1223 01:55:28.910153 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/m4fp 553\n" STEP: limiting log bytes Dec 23 01:55:29.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5132 --limit-bytes=1' Dec 23 01:55:29.193: INFO: stderr: "" Dec 23 01:55:29.193: INFO: stdout: "I" Dec 23 01:55:29.193: INFO: got output "I" STEP: exposing timestamps Dec 23 01:55:29.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5132 --tail=1 --timestamps' Dec 23 01:55:29.344: INFO: stderr: "" Dec 23 01:55:29.344: INFO: stdout: "2020-12-23T01:55:29.310239096Z I1223 01:55:29.310096 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/jts 238\n" Dec 23 01:55:29.344: INFO: got output "2020-12-23T01:55:29.310239096Z I1223 01:55:29.310096 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/jts 238\n" STEP: restricting to a time range Dec 23 01:55:31.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5132 --since=1s' Dec 23 01:55:31.963: INFO: stderr: "" Dec 23 01:55:31.963: INFO: stdout: "I1223 01:55:31.110105 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/k5zm 342\nI1223 01:55:31.310166 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/rcj 303\nI1223 01:55:31.510128 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/cvg 468\nI1223 01:55:31.710145 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/f6j 226\nI1223 01:55:31.910078 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/z4r 435\n" Dec 23 01:55:31.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5132 --since=24h' Dec 23 01:55:32.070: INFO: stderr: "" Dec 23
01:55:32.070: INFO: stdout: "I1223 01:55:26.909940 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/nrdp 401\nI1223 01:55:27.110214 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/qpj7 338\nI1223 01:55:27.310218 1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/9k62 471\nI1223 01:55:27.510156 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/rfb8 349\nI1223 01:55:27.710101 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/ssz 271\nI1223 01:55:27.910135 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/djs8 303\nI1223 01:55:28.110141 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/zgn 248\nI1223 01:55:28.310143 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/79t 542\nI1223 01:55:28.510129 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/zb9l 374\nI1223 01:55:28.710099 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/qlqt 377\nI1223 01:55:28.910153 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/m4fp 553\nI1223 01:55:29.110109 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/sbh 520\nI1223 01:55:29.310096 1 logs_generator.go:76] 12 POST /api/v1/namespaces/default/pods/jts 238\nI1223 01:55:29.510155 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/kube-system/pods/vnq 339\nI1223 01:55:29.710094 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/fjp 285\nI1223 01:55:29.910119 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/scg 431\nI1223 01:55:30.110123 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/mn5 571\nI1223 01:55:30.310104 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/szhl 410\nI1223 01:55:30.510123 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/f5b 487\nI1223 01:55:30.710144 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/hc2p 534\nI1223 01:55:30.910170 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/fhlj 404\nI1223 01:55:31.110105 1 logs_generator.go:76] 21 POST /api/v1/namespaces/kube-system/pods/k5zm 342\nI1223 01:55:31.310166 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/rcj 303\nI1223 01:55:31.510128 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/cvg 468\nI1223 01:55:31.710145 1 logs_generator.go:76] 24 POST /api/v1/namespaces/default/pods/f6j 226\nI1223 01:55:31.910078 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/z4r 435\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1364 Dec 23 01:55:32.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5132' Dec 23 01:55:34.361: INFO: stderr: "" Dec 23 01:55:34.361: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:55:34.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5132" for this suite. 
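The four filtering knobs the test just exercised are ordinary `kubectl logs` flags: `--tail`, `--limit-bytes`, `--timestamps`, and `--since`. A minimal sketch that replays them against the same pod follows; it assumes kubectl is on PATH and that the pod still exists.

```go
// logfilters.go: sketch of the four log-filtering flags exercised above,
// replayed against the same logs-generator pod/container.
package main

import (
	"fmt"
	"os/exec"
)

// logs runs `kubectl logs` for the logs-generator pod with extra flags.
func logs(extra ...string) {
	args := append([]string{"logs", "logs-generator", "logs-generator",
		"--namespace=kubectl-5132"}, extra...)
	out, _ := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s\n", args, out)
}

func main() {
	logs("--tail=1")                 // last line only
	logs("--limit-bytes=1")          // first byte only
	logs("--tail=1", "--timestamps") // prefix each line with an RFC3339 timestamp
	logs("--since=1s")               // only entries newer than one second
}
```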
• [SLOW TEST:9.974 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1354 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":32,"skipped":539,"failed":0} [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:55:34.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:55:34.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4728" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":539,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:55:34.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Dec 23 01:55:34.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Dec 23 01:55:34.777: INFO: stderr: "" Dec 23 01:55:34.777: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:39833\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:39833/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:55:34.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8496" for this suite. 
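The cluster-info stdout above is wrapped in ANSI color escapes (`\x1b[0;32m` ... `\x1b[0m`), which is why the raw string looks garbled. Any assertion on that output normally strips the escapes first; a small sketch:

```go
// stripansi.go: sketch of removing ANSI SGR color escapes (as seen in the
// cluster-info stdout above) before matching on the text.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ansi matches SGR color escapes such as "\x1b[0;32m" and "\x1b[0m".
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

func main() {
	raw := "\x1b[0;32mKubernetes master\x1b[0m is running at " +
		"\x1b[0;33mhttps://172.30.12.66:39833\x1b[0m"
	plain := ansi.ReplaceAllString(raw, "")
	fmt.Println(plain)
	fmt.Println("mentions master:", strings.Contains(plain, "Kubernetes master"))
}
```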
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":34,"skipped":545,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:55:34.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 23 01:55:34.867: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f41eae7-6a91-4ecd-9e45-a10734b5dfeb" in namespace "downward-api-3269" to be "success or failure" Dec 23 01:55:34.870: INFO: Pod "downwardapi-volume-3f41eae7-6a91-4ecd-9e45-a10734b5dfeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.890243ms Dec 23 01:55:36.874: INFO: Pod "downwardapi-volume-3f41eae7-6a91-4ecd-9e45-a10734b5dfeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006877819s Dec 23 01:55:38.878: INFO: Pod "downwardapi-volume-3f41eae7-6a91-4ecd-9e45-a10734b5dfeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010437099s STEP: Saw pod success Dec 23 01:55:38.878: INFO: Pod "downwardapi-volume-3f41eae7-6a91-4ecd-9e45-a10734b5dfeb" satisfied condition "success or failure" Dec 23 01:55:38.880: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3f41eae7-6a91-4ecd-9e45-a10734b5dfeb container client-container: STEP: delete the pod Dec 23 01:55:38.901: INFO: Waiting for pod downwardapi-volume-3f41eae7-6a91-4ecd-9e45-a10734b5dfeb to disappear Dec 23 01:55:38.906: INFO: Pod downwardapi-volume-3f41eae7-6a91-4ecd-9e45-a10734b5dfeb no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:55:38.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3269" for this suite. 
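The pod this next test builds exposes `limits.memory` through a downwardAPI volume while the container itself sets no memory limit, so the published value falls back to the node's allocatable memory. A hedged manifest sketch in the same shape follows, applied by piping to kubectl; the image, mount path, and names are illustrative, not taken from the test source.

```go
// downward.go: sketch of a pod with a downwardAPI volume publishing
// limits.memory for a container that sets no memory limit, so the value
// falls back to node allocatable. All names are illustrative.
package main

import (
	"os"
	"os/exec"
	"strings"
)

const manifest = `
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
`

func main() {
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}
```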
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":559,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:55:38.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-859d STEP: Creating a pod to test atomic-volume-subpath Dec 23 01:55:39.005: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-859d" in namespace "subpath-949" to be "success or failure" Dec 23 01:55:39.031: INFO: Pod "pod-subpath-test-secret-859d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.369762ms Dec 23 01:55:41.035: INFO: Pod "pod-subpath-test-secret-859d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030548545s Dec 23 01:55:43.040: INFO: Pod "pod-subpath-test-secret-859d": Phase="Running", Reason="", readiness=true. Elapsed: 4.035694154s Dec 23 01:55:45.158: INFO: Pod "pod-subpath-test-secret-859d": Phase="Running", Reason="", readiness=true. Elapsed: 6.153697106s Dec 23 01:55:47.162: INFO: Pod "pod-subpath-test-secret-859d": Phase="Running", Reason="", readiness=true. Elapsed: 8.15785536s Dec 23 01:55:49.167: INFO: Pod "pod-subpath-test-secret-859d": Phase="Running", Reason="", readiness=true. Elapsed: 10.161965737s Dec 23 01:55:51.170: INFO: Pod "pod-subpath-test-secret-859d": Phase="Running", Reason="", readiness=true. Elapsed: 12.165708981s Dec 23 01:55:53.174: INFO: Pod "pod-subpath-test-secret-859d": Phase="Running", Reason="", readiness=true. Elapsed: 14.169886176s Dec 23 01:55:55.179: INFO: Pod "pod-subpath-test-secret-859d": Phase="Running", Reason="", readiness=true. Elapsed: 16.174306851s Dec 23 01:55:57.183: INFO: Pod "pod-subpath-test-secret-859d": Phase="Running", Reason="", readiness=true. Elapsed: 18.178340747s Dec 23 01:55:59.187: INFO: Pod "pod-subpath-test-secret-859d": Phase="Running", Reason="", readiness=true. Elapsed: 20.182655583s Dec 23 01:56:01.192: INFO: Pod "pod-subpath-test-secret-859d": Phase="Running", Reason="", readiness=true. Elapsed: 22.187205131s Dec 23 01:56:03.196: INFO: Pod "pod-subpath-test-secret-859d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.191164701s STEP: Saw pod success Dec 23 01:56:03.196: INFO: Pod "pod-subpath-test-secret-859d" satisfied condition "success or failure" Dec 23 01:56:03.199: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-859d container test-container-subpath-secret-859d: STEP: delete the pod Dec 23 01:56:03.234: INFO: Waiting for pod pod-subpath-test-secret-859d to disappear Dec 23 01:56:03.248: INFO: Pod pod-subpath-test-secret-859d no longer exists STEP: Deleting pod pod-subpath-test-secret-859d Dec 23 01:56:03.248: INFO: Deleting pod "pod-subpath-test-secret-859d" in namespace "subpath-949" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:56:03.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-949" for this suite. • [SLOW TEST:24.347 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":36,"skipped":566,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:56:03.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Dec 23 01:56:07.355: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3798 PodName:pod-sharedvolume-797011ec-8fd0-4f43-ba3a-70f9d768238c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 23 01:56:07.356: INFO: >>> kubeConfig: /root/.kube/config I1223 01:56:07.393410 6 log.go:172] (0xc001c9c210) (0xc0024c2b40) Create stream I1223 01:56:07.393448 6 log.go:172] (0xc001c9c210) (0xc0024c2b40) Stream added, broadcasting: 1 I1223 01:56:07.395665 6 log.go:172] (0xc001c9c210) Reply frame received for 1 I1223 01:56:07.395698 6 log.go:172] (0xc001c9c210) (0xc0024c2c80) Create stream I1223 01:56:07.395712 6 log.go:172] (0xc001c9c210) (0xc0024c2c80) Stream added, broadcasting: 3 I1223 01:56:07.396978 6 log.go:172] (0xc001c9c210) Reply frame received for 3 I1223 01:56:07.397013 6 log.go:172] (0xc001c9c210) (0xc0023c4820) Create stream I1223 01:56:07.397023 6 log.go:172] (0xc001c9c210)
(0xc0023c4820) Stream added, broadcasting: 5 I1223 01:56:07.398012 6 log.go:172] (0xc001c9c210) Reply frame received for 5 I1223 01:56:07.466521 6 log.go:172] (0xc001c9c210) Data frame received for 5 I1223 01:56:07.466571 6 log.go:172] (0xc0023c4820) (5) Data frame handling I1223 01:56:07.466608 6 log.go:172] (0xc001c9c210) Data frame received for 3 I1223 01:56:07.466631 6 log.go:172] (0xc0024c2c80) (3) Data frame handling I1223 01:56:07.466641 6 log.go:172] (0xc0024c2c80) (3) Data frame sent I1223 01:56:07.466654 6 log.go:172] (0xc001c9c210) Data frame received for 3 I1223 01:56:07.466670 6 log.go:172] (0xc0024c2c80) (3) Data frame handling I1223 01:56:07.466692 6 log.go:172] (0xc001c9c210) Data frame received for 1 I1223 01:56:07.466707 6 log.go:172] (0xc0024c2b40) (1) Data frame handling I1223 01:56:07.466716 6 log.go:172] (0xc0024c2b40) (1) Data frame sent I1223 01:56:07.466726 6 log.go:172] (0xc001c9c210) (0xc0024c2b40) Stream removed, broadcasting: 1 I1223 01:56:07.466735 6 log.go:172] (0xc001c9c210) Go away received I1223 01:56:07.467086 6 log.go:172] (0xc001c9c210) (0xc0024c2b40) Stream removed, broadcasting: 1 I1223 01:56:07.467104 6 log.go:172] (0xc001c9c210) (0xc0024c2c80) Stream removed, broadcasting: 3 I1223 01:56:07.467112 6 log.go:172] (0xc001c9c210) (0xc0023c4820) Stream removed, broadcasting: 5 Dec 23 01:56:07.467: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:56:07.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3798" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":37,"skipped":581,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:56:07.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5179.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5179.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 23 01:56:13.636: INFO: DNS probes using dns-5179/dns-test-d7cb672e-7db4-463d-b116-326fcac6c9b1 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:56:13.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5179" for this suite. • [SLOW TEST:6.248 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":38,"skipped":582,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:56:13.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 23 01:56:14.884: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 23 01:56:17.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285375, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285375, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285375, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285374, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 23 01:56:20.239: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:56:20.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7713" for this suite. STEP: Destroying namespace "webhook-7713-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.791 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":39,"skipped":587,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:56:20.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Dec 23 01:56:28.686: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 23 01:56:28.706: INFO: Pod pod-with-prestop-http-hook still exists Dec 23 01:56:30.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 23 01:56:30.778: INFO: Pod pod-with-prestop-http-hook still exists Dec 23 01:56:32.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 23 01:56:32.710: INFO: Pod pod-with-prestop-http-hook still exists Dec 23 01:56:34.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 23 01:56:34.709: INFO: Pod pod-with-prestop-http-hook still exists Dec 23 01:56:36.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 23 01:56:36.716: INFO: Pod pod-with-prestop-http-hook still exists Dec 23 01:56:38.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 23 01:56:38.710: INFO: Pod pod-with-prestop-http-hook still exists Dec 23 01:56:40.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 23 01:56:40.710: INFO: Pod pod-with-prestop-http-hook still exists Dec 23 01:56:42.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 23 01:56:42.724: INFO: Pod pod-with-prestop-http-hook still exists Dec 23 01:56:44.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Dec 23 01:56:44.710: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:56:44.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9401" for this suite. 
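The pod under test carries a `lifecycle.preStop` httpGet hook, which the kubelet fires against the handler pod during graceful termination, before the container is killed. A sketch of that pod shape follows; the hook target address, port, and image are illustrative assumptions, not values from this run.

```go
// prestop.go: sketch of a pod whose preStop httpGet hook targets a separate
// handler pod, as in the test above. The handler IP, port, and image are
// illustrative assumptions.
package main

import (
	"os"
	"os/exec"
	"strings"
)

const manifest = `
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop
          port: 8080
          host: 10.244.0.10
`

func main() {
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}
```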
• [SLOW TEST:24.210 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":594,"failed":0} [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:56:44.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:53 [It] should be submitted and removed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Dec 23 01:56:48.824: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Dec 23 01:57:08.931: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:57:08.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3807" for this suite. 
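The graceful-delete flow above corresponds to an ordinary delete with a grace period: the API server marks the pod terminating, the kubelet observes the termination notice, and the object disappears once the grace period (default 30s, from `terminationGracePeriodSeconds`) has been honored. A minimal kubectl sketch, with illustrative names:

```go
// gracedelete.go: sketch of a graceful pod delete plus a wait, the flow the
// test above drives through the API. Pod and namespace names are illustrative.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --grace-period overrides the pod's terminationGracePeriodSeconds
	// (which defaults to 30s); --wait makes kubectl block until the object
	// is actually gone.
	out, err := exec.Command("kubectl", "delete", "pod", "example-pod",
		"--namespace=pods-demo", "--grace-period=30", "--wait=true").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("delete failed:", err)
	}
}
```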
• [SLOW TEST:24.220 seconds] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":41,"skipped":594,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:57:08.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Dec 23 01:57:17.127: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 01:57:17.155: INFO: Pod pod-with-poststart-http-hook still exists Dec 23 01:57:19.155: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 01:57:19.216: INFO: Pod pod-with-poststart-http-hook still exists Dec 23 01:57:21.155: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Dec 23 01:57:21.159: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:57:21.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4688" for this suite. 
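The repeated "Waiting for pod ... to disappear" / "still exists" pairs above come from a poll loop. The same pattern can be approximated with kubectl's nonzero exit status on NotFound; a sketch with illustrative names:

```go
// waitgone.go: sketch of the poll-until-deleted loop behind the repeated
// "Waiting for pod ... to disappear" lines above. Relies on kubectl's
// nonzero exit status when the object is NotFound; names are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		err := exec.Command("kubectl", "get", "pod", "example-pod",
			"--namespace=lifecycle-demo").Run()
		if err != nil { // NotFound (or any other error) exits nonzero
			fmt.Println("pod no longer exists")
			return
		}
		fmt.Println("pod still exists")
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to disappear")
}
```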
• [SLOW TEST:12.225 seconds] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":595,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:57:21.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3032 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3032 I1223 01:57:21.399662 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3032, replica count: 2 I1223 01:57:24.450091 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 01:57:27.450333 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 23 01:57:27.450: INFO: Creating new exec pod Dec 23 01:57:32.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3032 execpodlvbgg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Dec 23 01:57:32.735: INFO: stderr: "I1223 01:57:32.595259 373 log.go:172] (0xc000add970) (0xc000ac6320) Create stream\nI1223 01:57:32.595310 373 log.go:172] (0xc000add970) (0xc000ac6320) Stream added, broadcasting: 1\nI1223 01:57:32.598323 373 log.go:172] (0xc000add970) Reply frame received for 1\nI1223 01:57:32.598379 373 log.go:172] (0xc000add970) (0xc000b3e320) Create stream\nI1223 01:57:32.598397 373 log.go:172] (0xc000add970) (0xc000b3e320) Stream added, broadcasting: 3\nI1223 01:57:32.599422 373 log.go:172] (0xc000add970) Reply frame received for 3\nI1223 01:57:32.599466 373 log.go:172] (0xc000add970) (0xc000ac63c0) Create stream\nI1223 01:57:32.599482 373 log.go:172] (0xc000add970) (0xc000ac63c0) Stream added, broadcasting: 5\nI1223 01:57:32.600458 373 log.go:172] (0xc000add970) Reply frame received for 
5\nI1223 01:57:32.706507 373 log.go:172] (0xc000add970) Data frame received for 5\nI1223 01:57:32.706537 373 log.go:172] (0xc000ac63c0) (5) Data frame handling\nI1223 01:57:32.706559 373 log.go:172] (0xc000ac63c0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI1223 01:57:32.725866 373 log.go:172] (0xc000add970) Data frame received for 5\nI1223 01:57:32.725900 373 log.go:172] (0xc000ac63c0) (5) Data frame handling\nI1223 01:57:32.725927 373 log.go:172] (0xc000ac63c0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1223 01:57:32.726437 373 log.go:172] (0xc000add970) Data frame received for 3\nI1223 01:57:32.726490 373 log.go:172] (0xc000b3e320) (3) Data frame handling\nI1223 01:57:32.726596 373 log.go:172] (0xc000add970) Data frame received for 5\nI1223 01:57:32.726618 373 log.go:172] (0xc000ac63c0) (5) Data frame handling\nI1223 01:57:32.728337 373 log.go:172] (0xc000add970) Data frame received for 1\nI1223 01:57:32.728356 373 log.go:172] (0xc000ac6320) (1) Data frame handling\nI1223 01:57:32.728370 373 log.go:172] (0xc000ac6320) (1) Data frame sent\nI1223 01:57:32.728386 373 log.go:172] (0xc000add970) (0xc000ac6320) Stream removed, broadcasting: 1\nI1223 01:57:32.728409 373 log.go:172] (0xc000add970) Go away received\nI1223 01:57:32.728762 373 log.go:172] (0xc000add970) (0xc000ac6320) Stream removed, broadcasting: 1\nI1223 01:57:32.728780 373 log.go:172] (0xc000add970) (0xc000b3e320) Stream removed, broadcasting: 3\nI1223 01:57:32.728789 373 log.go:172] (0xc000add970) (0xc000ac63c0) Stream removed, broadcasting: 5\n" Dec 23 01:57:32.735: INFO: stdout: "" Dec 23 01:57:32.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3032 execpodlvbgg -- /bin/sh -x -c nc -zv -t -w 2 10.105.154.31 80' Dec 23 01:57:32.929: INFO: stderr: "I1223 01:57:32.850187 393 log.go:172] (0xc0009389a0) (0xc0006ec1e0) Create stream\nI1223 01:57:32.850239 393 log.go:172] (0xc0009389a0) (0xc0006ec1e0) Stream added, broadcasting: 1\nI1223 01:57:32.852362 393 log.go:172] (0xc0009389a0) Reply frame received for 1\nI1223 01:57:32.852410 393 log.go:172] (0xc0009389a0) (0xc00029b900) Create stream\nI1223 01:57:32.852426 393 log.go:172] (0xc0009389a0) (0xc00029b900) Stream added, broadcasting: 3\nI1223 01:57:32.853410 393 log.go:172] (0xc0009389a0) Reply frame received for 3\nI1223 01:57:32.853449 393 log.go:172] (0xc0009389a0) (0xc0006ec320) Create stream\nI1223 01:57:32.853460 393 log.go:172] (0xc0009389a0) (0xc0006ec320) Stream added, broadcasting: 5\nI1223 01:57:32.854095 393 log.go:172] (0xc0009389a0) Reply frame received for 5\nI1223 01:57:32.916988 393 log.go:172] (0xc0009389a0) Data frame received for 3\nI1223 01:57:32.917039 393 log.go:172] (0xc00029b900) (3) Data frame handling\nI1223 01:57:32.917088 393 log.go:172] (0xc0009389a0) Data frame received for 5\nI1223 01:57:32.917115 393 log.go:172] (0xc0006ec320) (5) Data frame handling\nI1223 01:57:32.917141 393 log.go:172] (0xc0006ec320) (5) Data frame sent\nI1223 01:57:32.917156 393 log.go:172] (0xc0009389a0) Data frame received for 5\nI1223 01:57:32.917171 393 log.go:172] (0xc0006ec320) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.154.31 80\nConnection to 10.105.154.31 80 port [tcp/http] succeeded!\nI1223 01:57:32.918813 393 log.go:172] (0xc0009389a0) Data frame received for 1\nI1223 01:57:32.918841 393 log.go:172] (0xc0006ec1e0) (1) Data frame handling\nI1223 01:57:32.918861 393 log.go:172] (0xc0006ec1e0) (1) Data frame sent\nI1223 01:57:32.918875 393 
log.go:172] (0xc0009389a0) (0xc0006ec1e0) Stream removed, broadcasting: 1\nI1223 01:57:32.918891 393 log.go:172] (0xc0009389a0) Go away received\nI1223 01:57:32.919394 393 log.go:172] (0xc0009389a0) (0xc0006ec1e0) Stream removed, broadcasting: 1\nI1223 01:57:32.919419 393 log.go:172] (0xc0009389a0) (0xc00029b900) Stream removed, broadcasting: 3\nI1223 01:57:32.919427 393 log.go:172] (0xc0009389a0) (0xc0006ec320) Stream removed, broadcasting: 5\n" Dec 23 01:57:32.930: INFO: stdout: "" Dec 23 01:57:32.930: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:57:32.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3032" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.824 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":43,"skipped":602,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:57:32.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi-version CRD Dec 23 01:57:33.027: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:57:48.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2386" for this suite.
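Note: the "mark a version not served" step corresponds to flipping served: false on one version of a multi-version CustomResourceDefinition; once a version is unserved, its definitions drop out of the OpenAPI document the apiserver publishes. A minimal sketch of such a CRD (hypothetical group and kind, not the suite's actual fixture):

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.example.com            # hypothetical plural.group
    spec:
      group: example.com
      scope: Namespaced
      names: {plural: foos, singular: foo, kind: Foo}
      versions:
      - name: v1
        served: true
        storage: true
        schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
      - name: v2
        served: false                   # unserved, so it should vanish from the published spec
        storage: false
        schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
    EOF

The published document can then be inspected with kubectl get --raw /openapi/v2 to confirm the v2 definitions are gone while v1 is unchanged.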
• [SLOW TEST:15.676 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":44,"skipped":667,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:57:48.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:57:54.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1978" for this suite. STEP: Destroying namespace "nsdeletetest-8682" for this suite. Dec 23 01:57:55.006: INFO: Namespace nsdeletetest-8682 was already deleted STEP: Destroying namespace "nsdeletetest-3908" for this suite. 
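Note: deleting a namespace cascades to every namespaced object inside it, Services included, which is what this test verifies. A quick sketch with hypothetical names:

    kubectl create namespace nsdeletetest
    kubectl create service clusterip test-service --tcp=80:80 --namespace=nsdeletetest
    kubectl delete namespace nsdeletetest
    kubectl get services --namespace=nsdeletetest   # errors once the namespace is fully removed

Recreating a namespace with the same name afterwards yields an empty namespace, as the "Verifying there is no service in the namespace" step checks.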
• [SLOW TEST:6.340 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":45,"skipped":686,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:57:55.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1223 01:58:25.625488 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 23 01:58:25.625: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:58:25.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3549" for this suite. 
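Note: PropagationPolicy=Orphan tells the garbage collector to leave dependents (here, the Deployment's ReplicaSet) alone when the owner is deleted. From the CLI this maps to the cascade flag; a sketch with hypothetical names:

    kubectl create deployment sample --image=nginx
    kubectl delete deployment sample --cascade=false    # orphan dependents (newer kubectl spells this --cascade=orphan)
    kubectl get replicasets -l app=sample               # the orphaned ReplicaSet is still present

The 30-second wait in the test exists precisely to catch the garbage collector wrongly deleting the orphaned ReplicaSet anyway.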
• [SLOW TEST:30.622 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":46,"skipped":696,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:58:25.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Dec 23 01:58:25.977: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Dec 23 01:58:27.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285506, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285506, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285506, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285505, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 23 01:58:31.017: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 01:58:31.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:58:32.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-958" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.554 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":47,"skipped":717,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:58:32.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 23 01:58:32.297: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3669fe32-f5bf-4d98-8a58-79478fa71d60" in namespace "projected-481" to be "success or failure" Dec 23 01:58:32.570: INFO: Pod "downwardapi-volume-3669fe32-f5bf-4d98-8a58-79478fa71d60": Phase="Pending", Reason="", readiness=false. Elapsed: 273.162506ms Dec 23 01:58:34.575: INFO: Pod "downwardapi-volume-3669fe32-f5bf-4d98-8a58-79478fa71d60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277865509s Dec 23 01:58:36.579: INFO: Pod "downwardapi-volume-3669fe32-f5bf-4d98-8a58-79478fa71d60": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.281508631s STEP: Saw pod success Dec 23 01:58:36.579: INFO: Pod "downwardapi-volume-3669fe32-f5bf-4d98-8a58-79478fa71d60" satisfied condition "success or failure" Dec 23 01:58:36.581: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3669fe32-f5bf-4d98-8a58-79478fa71d60 container client-container: STEP: delete the pod Dec 23 01:58:36.607: INFO: Waiting for pod downwardapi-volume-3669fe32-f5bf-4d98-8a58-79478fa71d60 to disappear Dec 23 01:58:36.611: INFO: Pod downwardapi-volume-3669fe32-f5bf-4d98-8a58-79478fa71d60 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:58:36.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-481" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":724,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:58:36.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 23 01:58:37.486: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 23 01:58:39.496: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285517, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285517, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285517, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285517, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 23 01:58:42.533: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via 
the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:58:42.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2581" for this suite. STEP: Destroying namespace "webhook-2581-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.189 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":49,"skipped":734,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:58:42.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1276 STEP: creating the pod Dec 23 01:58:42.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1582' Dec 23 01:58:43.216: INFO: stderr: "" Dec 23 01:58:43.216: INFO: stdout: "pod/pause created\n" Dec 23 01:58:43.216: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Dec 23 01:58:43.216: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1582" to be "running and ready" Dec 23 01:58:43.254: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 38.44847ms Dec 23 01:58:45.338: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122094346s Dec 23 01:58:47.342: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.125791031s Dec 23 01:58:47.342: INFO: Pod "pause" satisfied condition "running and ready" Dec 23 01:58:47.342: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Dec 23 01:58:47.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1582' Dec 23 01:58:47.458: INFO: stderr: "" Dec 23 01:58:47.458: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Dec 23 01:58:47.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1582' Dec 23 01:58:47.560: INFO: stderr: "" Dec 23 01:58:47.560: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Dec 23 01:58:47.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1582' Dec 23 01:58:47.668: INFO: stderr: "" Dec 23 01:58:47.668: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Dec 23 01:58:47.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1582' Dec 23 01:58:47.756: INFO: stderr: "" Dec 23 01:58:47.756: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1283 STEP: using delete to clean up resources Dec 23 01:58:47.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1582' Dec 23 01:58:47.957: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Dec 23 01:58:47.957: INFO: stdout: "pod \"pause\" force deleted\n" Dec 23 01:58:47.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1582' Dec 23 01:58:48.318: INFO: stderr: "No resources found in kubectl-1582 namespace.\n" Dec 23 01:58:48.319: INFO: stdout: "" Dec 23 01:58:48.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1582 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 23 01:58:48.407: INFO: stderr: "" Dec 23 01:58:48.407: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:58:48.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1582" for this suite. 
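Note: the manifest piped into kubectl create -f - at the start of this test is not echoed in the log; an equivalent pod would look roughly like the following (the image tag is an assumption, since the suite's exact pause image is not logged):

    kubectl create --namespace=kubectl-1582 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pause
      labels:
        name: pause                   # matched later by the -l name=pause cleanup queries
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1   # assumed tag
    EOF

The kubectl label, get -L, and label key- invocations then behave exactly as logged above.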
• [SLOW TEST:5.607 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1273 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":50,"skipped":735,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:58:48.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 01:58:48.520: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-f333de09-0519-4527-a7f2-2418be019a03" in namespace "security-context-test-9846" to be "success or failure" Dec 23 01:58:48.522: INFO: Pod "busybox-readonly-false-f333de09-0519-4527-a7f2-2418be019a03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490923ms Dec 23 01:58:50.526: INFO: Pod "busybox-readonly-false-f333de09-0519-4527-a7f2-2418be019a03": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006464209s Dec 23 01:58:52.530: INFO: Pod "busybox-readonly-false-f333de09-0519-4527-a7f2-2418be019a03": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010416765s Dec 23 01:58:52.530: INFO: Pod "busybox-readonly-false-f333de09-0519-4527-a7f2-2418be019a03" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:58:52.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9846" for this suite. 
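Note: the writable-rootfs check reduces to securityContext.readOnlyRootFilesystem on the container spec. A minimal sketch (hypothetical names; busybox assumed, as the pod name above suggests):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-false
    spec:
      restartPolicy: Never
      containers:
      - name: writer
        image: busybox
        command: ["sh", "-c", "echo ok > /tmp/file && cat /tmp/file"]
        securityContext:
          readOnlyRootFilesystem: false   # rootfs stays writable, so the write succeeds and the pod Succeeds
    EOF

With readOnlyRootFilesystem: true the same write would fail and the pod would never reach Succeeded.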
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":780,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:58:52.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 01:58:52.642: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Dec 23 01:58:52.655: INFO: Number of nodes with available pods: 0 Dec 23 01:58:52.655: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. Dec 23 01:58:52.688: INFO: Number of nodes with available pods: 0 Dec 23 01:58:52.688: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:58:53.709: INFO: Number of nodes with available pods: 0 Dec 23 01:58:53.709: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:58:54.692: INFO: Number of nodes with available pods: 0 Dec 23 01:58:54.692: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:58:55.692: INFO: Number of nodes with available pods: 0 Dec 23 01:58:55.692: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:58:56.692: INFO: Number of nodes with available pods: 1 Dec 23 01:58:56.692: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Dec 23 01:58:56.730: INFO: Number of nodes with available pods: 1 Dec 23 01:58:56.730: INFO: Number of running nodes: 0, number of available pods: 1 Dec 23 01:58:57.734: INFO: Number of nodes with available pods: 0 Dec 23 01:58:57.734: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Dec 23 01:58:57.741: INFO: Number of nodes with available pods: 0 Dec 23 01:58:57.741: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:58:58.745: INFO: Number of nodes with available pods: 0 Dec 23 01:58:58.745: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:58:59.745: INFO: Number of nodes with available pods: 0 Dec 23 01:58:59.745: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:59:00.745: INFO: Number of nodes with available pods: 0 Dec 23 01:59:00.745: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:59:01.745: INFO: Number of nodes with available pods: 0 Dec 23 01:59:01.745: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:59:02.745: INFO: Number of nodes 
with available pods: 0 Dec 23 01:59:02.745: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:59:03.770: INFO: Number of nodes with available pods: 0 Dec 23 01:59:03.770: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:59:04.745: INFO: Number of nodes with available pods: 0 Dec 23 01:59:04.745: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:59:05.745: INFO: Number of nodes with available pods: 0 Dec 23 01:59:05.745: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:59:06.744: INFO: Number of nodes with available pods: 0 Dec 23 01:59:06.744: INFO: Node jerma-worker2 is running more than one daemon pod Dec 23 01:59:07.745: INFO: Number of nodes with available pods: 1 Dec 23 01:59:07.745: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7529, will wait for the garbage collector to delete the pods Dec 23 01:59:07.809: INFO: Deleting DaemonSet.extensions daemon-set took: 6.106963ms Dec 23 01:59:08.210: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.261598ms Dec 23 01:59:14.112: INFO: Number of nodes with available pods: 0 Dec 23 01:59:14.112: INFO: Number of running nodes: 0, number of available pods: 0 Dec 23 01:59:14.178: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7529/daemonsets","resourceVersion":"23926792"},"items":null} Dec 23 01:59:14.180: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7529/pods","resourceVersion":"23926792"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:59:14.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7529" for this suite. 
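Note: the blue/green choreography above is a DaemonSet with a nodeSelector plus node relabelling: pods launch only on matching nodes and are unscheduled when the label changes. A sketch reusing the node name and image seen elsewhere in this run:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector: {matchLabels: {app: daemon-set}}
      template:
        metadata: {labels: {app: daemon-set}}
        spec:
          nodeSelector: {color: blue}   # schedule only on nodes labelled color=blue
          containers:
          - name: app
            image: docker.io/library/httpd:2.4.38-alpine
    EOF
    kubectl label node jerma-worker2 color=blue                # daemon pod appears on the node
    kubectl label node jerma-worker2 color=green --overwrite   # daemon pod is evicted again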
• [SLOW TEST:21.849 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":52,"skipped":793,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:59:14.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:59:18.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-791" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":53,"skipped":796,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:59:18.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-07f5ea9e-5ab5-4519-9fa7-e3649aad19df STEP: Creating a pod to test consume secrets Dec 23 01:59:18.891: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ce6bd101-bb8a-44b4-a720-a4547a91f805" in namespace "projected-6949" to be "success or failure" Dec 23 01:59:18.948: INFO: Pod "pod-projected-secrets-ce6bd101-bb8a-44b4-a720-a4547a91f805": Phase="Pending", Reason="", readiness=false. Elapsed: 57.319474ms Dec 23 01:59:20.952: INFO: Pod "pod-projected-secrets-ce6bd101-bb8a-44b4-a720-a4547a91f805": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.061083691s Dec 23 01:59:22.956: INFO: Pod "pod-projected-secrets-ce6bd101-bb8a-44b4-a720-a4547a91f805": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065615272s STEP: Saw pod success Dec 23 01:59:22.957: INFO: Pod "pod-projected-secrets-ce6bd101-bb8a-44b4-a720-a4547a91f805" satisfied condition "success or failure" Dec 23 01:59:22.959: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ce6bd101-bb8a-44b4-a720-a4547a91f805 container secret-volume-test: STEP: delete the pod Dec 23 01:59:23.130: INFO: Waiting for pod pod-projected-secrets-ce6bd101-bb8a-44b4-a720-a4547a91f805 to disappear Dec 23 01:59:23.165: INFO: Pod pod-projected-secrets-ce6bd101-bb8a-44b4-a720-a4547a91f805 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:59:23.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6949" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":829,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:59:23.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Dec 23 01:59:23.377: INFO: Waiting up to 5m0s for pod "var-expansion-0f116906-2bcb-4e10-828d-b7b2e1cda6fc" in namespace "var-expansion-1263" to be "success or failure" Dec 23 01:59:23.395: INFO: Pod "var-expansion-0f116906-2bcb-4e10-828d-b7b2e1cda6fc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.88863ms Dec 23 01:59:25.398: INFO: Pod "var-expansion-0f116906-2bcb-4e10-828d-b7b2e1cda6fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021632984s Dec 23 01:59:27.402: INFO: Pod "var-expansion-0f116906-2bcb-4e10-828d-b7b2e1cda6fc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025537738s STEP: Saw pod success Dec 23 01:59:27.402: INFO: Pod "var-expansion-0f116906-2bcb-4e10-828d-b7b2e1cda6fc" satisfied condition "success or failure" Dec 23 01:59:27.405: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-0f116906-2bcb-4e10-828d-b7b2e1cda6fc container dapi-container: STEP: delete the pod Dec 23 01:59:27.425: INFO: Waiting for pod var-expansion-0f116906-2bcb-4e10-828d-b7b2e1cda6fc to disappear Dec 23 01:59:27.469: INFO: Pod var-expansion-0f116906-2bcb-4e10-828d-b7b2e1cda6fc no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 01:59:27.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1263" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":850,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 01:59:27.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1223 02:00:08.516123 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Dec 23 02:00:08.516: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:00:08.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1659" for this suite. 
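Note: orphaning can also be requested at the REST level by sending a DeleteOptions body with propagationPolicy: Orphan, which is what this test does for its ReplicationController. A sketch via kubectl proxy (resource and label names are hypothetical):

    kubectl proxy --port=8001 &
    curl -X DELETE http://localhost:8001/api/v1/namespaces/default/replicationcontrollers/my-rc \
      -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'
    kubectl get pods -l name=my-rc    # the pods outlive their deleted owner

As in the earlier garbage collector test, the suite then waits 30 seconds to make sure the collector does not delete the orphaned pods by mistake.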
• [SLOW TEST:41.048 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":56,"skipped":859,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:00:08.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-dab7c73e-080d-4371-a3c7-d6faa54623d8 STEP: Creating a pod to test consume configMaps Dec 23 02:00:08.607: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c45fcf0-14c7-4f81-85ce-e7a83f76e321" in namespace "configmap-5519" to be "success or failure" Dec 23 02:00:08.611: INFO: Pod "pod-configmaps-6c45fcf0-14c7-4f81-85ce-e7a83f76e321": Phase="Pending", Reason="", readiness=false. Elapsed: 3.493526ms Dec 23 02:00:10.615: INFO: Pod "pod-configmaps-6c45fcf0-14c7-4f81-85ce-e7a83f76e321": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007918957s Dec 23 02:00:12.619: INFO: Pod "pod-configmaps-6c45fcf0-14c7-4f81-85ce-e7a83f76e321": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011946439s STEP: Saw pod success Dec 23 02:00:12.619: INFO: Pod "pod-configmaps-6c45fcf0-14c7-4f81-85ce-e7a83f76e321" satisfied condition "success or failure" Dec 23 02:00:12.623: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6c45fcf0-14c7-4f81-85ce-e7a83f76e321 container configmap-volume-test: STEP: delete the pod Dec 23 02:00:12.667: INFO: Waiting for pod pod-configmaps-6c45fcf0-14c7-4f81-85ce-e7a83f76e321 to disappear Dec 23 02:00:12.757: INFO: Pod pod-configmaps-6c45fcf0-14c7-4f81-85ce-e7a83f76e321 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:00:12.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5519" for this suite. 
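Note: both behaviors named in the test title, key-to-path mapping and a per-item file mode, live in the items list of the configMap volume source. A sketch with hypothetical names:

    kubectl create configmap special-config --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "ls -lR /etc/configmap-volume && cat /etc/configmap-volume/path/to/data-2"]
        volumeMounts:
        - {name: configmap-volume, mountPath: /etc/configmap-volume}
      volumes:
      - name: configmap-volume
        configMap:
          name: special-config
          items:
          - key: data-1               # remap the key to a different file path...
            path: path/to/data-2
            mode: 0400                # ...with an explicit per-item mode
    EOF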
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":888,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:00:12.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 02:00:12.856: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Dec 23 02:00:12.886: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:12.894: INFO: Number of nodes with available pods: 0 Dec 23 02:00:12.894: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:00:13.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:14.000: INFO: Number of nodes with available pods: 0 Dec 23 02:00:14.000: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:00:14.904: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:15.341: INFO: Number of nodes with available pods: 0 Dec 23 02:00:15.341: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:00:16.165: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:16.173: INFO: Number of nodes with available pods: 0 Dec 23 02:00:16.173: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:00:16.951: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:17.171: INFO: Number of nodes with available pods: 0 Dec 23 02:00:17.171: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:00:18.178: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:18.228: INFO: Number of nodes with available pods: 0 Dec 23 02:00:18.228: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:00:18.931: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:18.988: INFO: Number of nodes with available pods: 1 Dec 23 02:00:18.988: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:00:19.991: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:20.165: INFO: Number of nodes with available pods: 2 Dec 23 02:00:20.165: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Dec 23 02:00:20.391: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:20.391: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:20.527: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:21.639: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:21.639: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:21.695: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:22.620: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:22.620: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:22.629: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:23.530: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:23.530: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:23.534: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:24.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:24.532: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:24.532: INFO: Pod daemon-set-k7r9c is not available Dec 23 02:00:24.536: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:25.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Dec 23 02:00:25.532: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:25.532: INFO: Pod daemon-set-k7r9c is not available Dec 23 02:00:25.537: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:26.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:26.532: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:26.532: INFO: Pod daemon-set-k7r9c is not available Dec 23 02:00:26.535: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:27.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:27.532: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:27.532: INFO: Pod daemon-set-k7r9c is not available Dec 23 02:00:27.536: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:28.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:28.532: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:28.532: INFO: Pod daemon-set-k7r9c is not available Dec 23 02:00:28.536: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:29.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:29.532: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:29.532: INFO: Pod daemon-set-k7r9c is not available Dec 23 02:00:29.536: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:30.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:30.532: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:30.532: INFO: Pod daemon-set-k7r9c is not available Dec 23 02:00:30.535: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:31.532: INFO: Wrong image for pod: daemon-set-bzw2d. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:31.532: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:31.532: INFO: Pod daemon-set-k7r9c is not available Dec 23 02:00:31.536: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:32.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:32.532: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:32.532: INFO: Pod daemon-set-k7r9c is not available Dec 23 02:00:32.537: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:33.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:33.532: INFO: Wrong image for pod: daemon-set-k7r9c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:33.532: INFO: Pod daemon-set-k7r9c is not available Dec 23 02:00:33.565: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:34.548: INFO: Pod daemon-set-56kml is not available Dec 23 02:00:34.548: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:34.553: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:35.563: INFO: Pod daemon-set-56kml is not available Dec 23 02:00:35.563: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:35.567: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:36.531: INFO: Pod daemon-set-56kml is not available Dec 23 02:00:36.531: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:36.534: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:37.531: INFO: Pod daemon-set-56kml is not available Dec 23 02:00:37.531: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:37.535: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:38.554: INFO: Wrong image for pod: daemon-set-bzw2d. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:38.558: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:39.531: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:39.531: INFO: Pod daemon-set-bzw2d is not available Dec 23 02:00:39.535: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:40.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:40.532: INFO: Pod daemon-set-bzw2d is not available Dec 23 02:00:40.537: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:41.533: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:41.533: INFO: Pod daemon-set-bzw2d is not available Dec 23 02:00:41.539: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:42.532: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:42.532: INFO: Pod daemon-set-bzw2d is not available Dec 23 02:00:42.535: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:43.531: INFO: Wrong image for pod: daemon-set-bzw2d. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Dec 23 02:00:43.531: INFO: Pod daemon-set-bzw2d is not available Dec 23 02:00:43.533: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:44.530: INFO: Pod daemon-set-vbbmj is not available Dec 23 02:00:44.534: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Dec 23 02:00:44.537: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:44.539: INFO: Number of nodes with available pods: 1 Dec 23 02:00:44.539: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:00:45.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:45.546: INFO: Number of nodes with available pods: 1 Dec 23 02:00:45.546: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:00:46.545: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:46.548: INFO: Number of nodes with available pods: 1 Dec 23 02:00:46.548: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:00:47.544: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:00:47.546: INFO: Number of nodes with available pods: 2 Dec 23 02:00:47.546: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8552, will wait for the garbage collector to delete the pods Dec 23 02:00:47.636: INFO: Deleting DaemonSet.extensions daemon-set took: 23.857595ms Dec 23 02:00:47.736: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.22564ms Dec 23 02:00:53.750: INFO: Number of nodes with available pods: 0 Dec 23 02:00:53.750: INFO: Number of running nodes: 0, number of available pods: 0 Dec 23 02:00:53.756: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8552/daemonsets","resourceVersion":"23927768"},"items":null} Dec 23 02:00:53.758: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8552/pods","resourceVersion":"23927768"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:00:53.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8552" for this suite. 
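
The long run of "Wrong image" and "not available" lines above is the rolling update itself: with the default maxUnavailable of 1, the controller takes down one old httpd pod at a time and waits for its agnhost replacement to become available before moving to the next node, while the tainted control-plane node is skipped on every poll. A minimal sketch of the update the test drives, assuming client-go v0.17.x to match the cluster version (newer releases also take a context.Context in Get/Update); the kubeconfig path, namespace, and object names are copied from the log:

    package main

    import (
        "fmt"
        "log"

        appsv1 "k8s.io/api/apps/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig the e2e run points at.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        ns, name := "daemonsets-8552", "daemon-set"
        ds, err := client.AppsV1().DaemonSets(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        // RollingUpdate is the default strategy; setting it explicitly makes the
        // intent visible. With RollingUpdate left nil, maxUnavailable defaults
        // to 1, so pods are replaced one node at a time.
        ds.Spec.UpdateStrategy = appsv1.DaemonSetUpdateStrategy{
            Type: appsv1.RollingUpdateDaemonSetStrategyType,
        }
        // The same image swap the test performs: httpd -> agnhost.
        ds.Spec.Template.Spec.Containers[0].Image = "gcr.io/kubernetes-e2e-test-images/agnhost:2.8"

        if _, err := client.AppsV1().DaemonSets(ns).Update(ds); err != nil {
            log.Fatal(err)
        }
        fmt.Println("rolling update triggered")
    }

After the Update call returns, polling the DaemonSet status (as the loop above does) is how a caller knows when every node is running the new image.
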
• [SLOW TEST:41.008 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":58,"skipped":890,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:00:53.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 23 02:00:53.914: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a30a0094-240c-48cb-a471-e1c66bbb5399" in namespace "downward-api-5993" to be "success or failure" Dec 23 02:00:53.936: INFO: Pod "downwardapi-volume-a30a0094-240c-48cb-a471-e1c66bbb5399": Phase="Pending", Reason="", readiness=false. Elapsed: 21.724911ms Dec 23 02:00:55.945: INFO: Pod "downwardapi-volume-a30a0094-240c-48cb-a471-e1c66bbb5399": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030411175s Dec 23 02:00:57.949: INFO: Pod "downwardapi-volume-a30a0094-240c-48cb-a471-e1c66bbb5399": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034462153s STEP: Saw pod success Dec 23 02:00:57.949: INFO: Pod "downwardapi-volume-a30a0094-240c-48cb-a471-e1c66bbb5399" satisfied condition "success or failure" Dec 23 02:00:57.951: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a30a0094-240c-48cb-a471-e1c66bbb5399 container client-container: STEP: delete the pod Dec 23 02:00:58.186: INFO: Waiting for pod downwardapi-volume-a30a0094-240c-48cb-a471-e1c66bbb5399 to disappear Dec 23 02:00:58.210: INFO: Pod downwardapi-volume-a30a0094-240c-48cb-a471-e1c66bbb5399 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:00:58.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5993" for this suite. 
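
What the pod under test does is mount a downward API volume whose file is rendered from the container's own memory limit (limits.memory divided by a divisor), and the suite then verifies the file contents through the container log. A sketch of such a pod spec, assuming the v0.17.x API types; the image, command, mount path, and the 64Mi limit are illustrative stand-ins rather than the test's exact values:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // stand-in; the suite uses its own test image
                    Command: []string{"sh", "-c", "cat /etc/podinfo/mem_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{
                            corev1.ResourceMemory: resource.MustParse("64Mi"),
                        },
                    },
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "podinfo", MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "mem_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.memory",
                                    Divisor:       resource.MustParse("1Mi"),
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, err := json.MarshalIndent(pod, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }

With a 1Mi divisor the mounted file contains just "64", which keeps the test's assertion a plain string comparison against the container log.
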
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":898,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:00:58.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 23 02:00:59.052: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 23 02:01:01.062: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285659, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285659, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285659, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285658, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 02:01:03.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285659, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285659, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285659, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285658, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 23 02:01:06.094: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 02:01:06.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8034-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:01:07.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4095" for this suite. STEP: Destroying namespace "webhook-4095-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.143 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":60,"skipped":901,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:01:07.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 23 02:01:08.268: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 23 02:01:10.310: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285668, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285668, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285668, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744285668, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 23 02:01:13.387: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:01:13.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-276" for this suite. STEP: Destroying namespace "webhook-276-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.669 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":61,"skipped":901,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:01:14.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 23 02:01:14.157: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7934554-07a1-468f-a6f1-d88ee4f9d9f1" in namespace "downward-api-5925" to be "success or failure" Dec 23 02:01:14.164: INFO: Pod "downwardapi-volume-b7934554-07a1-468f-a6f1-d88ee4f9d9f1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.212167ms Dec 23 02:01:16.167: INFO: Pod "downwardapi-volume-b7934554-07a1-468f-a6f1-d88ee4f9d9f1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010613546s Dec 23 02:01:18.173: INFO: Pod "downwardapi-volume-b7934554-07a1-468f-a6f1-d88ee4f9d9f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016419505s STEP: Saw pod success Dec 23 02:01:18.173: INFO: Pod "downwardapi-volume-b7934554-07a1-468f-a6f1-d88ee4f9d9f1" satisfied condition "success or failure" Dec 23 02:01:18.176: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-b7934554-07a1-468f-a6f1-d88ee4f9d9f1 container client-container: STEP: delete the pod Dec 23 02:01:18.359: INFO: Waiting for pod downwardapi-volume-b7934554-07a1-468f-a6f1-d88ee4f9d9f1 to disappear Dec 23 02:01:18.361: INFO: Pod downwardapi-volume-b7934554-07a1-468f-a6f1-d88ee4f9d9f1 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:01:18.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5925" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":923,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:01:18.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 02:01:18.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Dec 23 02:01:19.213: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-12-23T02:01:19Z generation:1 name:name1 resourceVersion:23928115 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:aaa079d0-4527-4c71-b20e-e6006d0aec0c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Dec 23 02:01:29.217: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-12-23T02:01:29Z generation:1 name:name2 resourceVersion:23928168 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:46d93e20-2361-4a35-bf93-35d86be164da] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Dec 23 02:01:39.238: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-12-23T02:01:19Z generation:2 name:name1 resourceVersion:23928221 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:aaa079d0-4527-4c71-b20e-e6006d0aec0c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Dec 23 02:01:49.250: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test 
kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-12-23T02:01:29Z generation:2 name:name2 resourceVersion:23928285 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:46d93e20-2361-4a35-bf93-35d86be164da] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Dec 23 02:01:59.258: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-12-23T02:01:19Z generation:2 name:name1 resourceVersion:23928337 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:aaa079d0-4527-4c71-b20e-e6006d0aec0c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Dec 23 02:02:09.267: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-12-23T02:01:29Z generation:2 name:name2 resourceVersion:23928370 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:46d93e20-2361-4a35-bf93-35d86be164da] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:02:19.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1643" for this suite. • [SLOW TEST:61.417 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":63,"skipped":931,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:02:19.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-f6446479-e0e5-40cf-ae10-cc4023b518b3 STEP: Creating a pod to test consume configMaps Dec 23 02:02:20.053: INFO: Waiting up to 5m0s for pod "pod-configmaps-eeb9454f-07e5-488f-ac7b-42b1cd1d589b" in namespace "configmap-1475" to be "success or failure" Dec 23 02:02:20.058: INFO: Pod "pod-configmaps-eeb9454f-07e5-488f-ac7b-42b1cd1d589b": Phase="Pending", 
Reason="", readiness=false. Elapsed: 4.769035ms Dec 23 02:02:22.281: INFO: Pod "pod-configmaps-eeb9454f-07e5-488f-ac7b-42b1cd1d589b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.227182449s Dec 23 02:02:24.285: INFO: Pod "pod-configmaps-eeb9454f-07e5-488f-ac7b-42b1cd1d589b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.231685917s STEP: Saw pod success Dec 23 02:02:24.285: INFO: Pod "pod-configmaps-eeb9454f-07e5-488f-ac7b-42b1cd1d589b" satisfied condition "success or failure" Dec 23 02:02:24.288: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-eeb9454f-07e5-488f-ac7b-42b1cd1d589b container configmap-volume-test: STEP: delete the pod Dec 23 02:02:24.401: INFO: Waiting for pod pod-configmaps-eeb9454f-07e5-488f-ac7b-42b1cd1d589b to disappear Dec 23 02:02:24.452: INFO: Pod pod-configmaps-eeb9454f-07e5-488f-ac7b-42b1cd1d589b no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:02:24.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1475" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":934,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:02:24.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Dec 23 02:02:24.525: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Dec 23 02:02:24.538: INFO: Waiting for terminating namespaces to be deleted... 
Dec 23 02:02:24.540: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Dec 23 02:02:24.556: INFO: kube-proxy-knc9b from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded) Dec 23 02:02:24.556: INFO: Container kube-proxy ready: true, restart count 0 Dec 23 02:02:24.556: INFO: chaos-daemon-r2kj7 from default started at 2020-11-22 21:56:29 +0000 UTC (1 container statuses recorded) Dec 23 02:02:24.556: INFO: Container chaos-daemon ready: true, restart count 0 Dec 23 02:02:24.556: INFO: kindnet-nlsvd from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded) Dec 23 02:02:24.556: INFO: Container kindnet-cni ready: true, restart count 0 Dec 23 02:02:24.556: INFO: chaos-controller-manager-7f9bbd476f-jm8nf from default started at 2020-11-22 21:56:29 +0000 UTC (1 container statuses recorded) Dec 23 02:02:24.556: INFO: Container chaos-mesh ready: true, restart count 0 Dec 23 02:02:24.556: INFO: rally-44b0fc03-73ejjtg5 from c-rally-44b0fc03-zq9ifhwq started at 2020-12-23 02:02:12 +0000 UTC (1 container statuses recorded) Dec 23 02:02:24.556: INFO: Container rally-44b0fc03-73ejjtg5 ready: true, restart count 0 Dec 23 02:02:24.556: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Dec 23 02:02:24.560: INFO: kindnet-5wksn from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded) Dec 23 02:02:24.560: INFO: Container kindnet-cni ready: true, restart count 0 Dec 23 02:02:24.560: INFO: kube-proxy-jgndm from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded) Dec 23 02:02:24.560: INFO: Container kube-proxy ready: true, restart count 0 Dec 23 02:02:24.560: INFO: rally-44b0fc03-73ejjtg5-h6b8t from c-rally-44b0fc03-zq9ifhwq started at 2020-12-23 02:02:17 +0000 UTC (1 container statuses recorded) Dec 23 02:02:24.560: INFO: Container rally-44b0fc03-73ejjtg5 ready: false, restart count 0 Dec 23 02:02:24.560: INFO: chaos-daemon-mzgg5 from default started at 2020-11-22 21:56:28 +0000 UTC (1 container statuses recorded) Dec 23 02:02:24.560: INFO: Container chaos-daemon ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ac1ba76a-6446-4d78-840a-2e561da0aab5 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-ac1ba76a-6446-4d78-840a-2e561da0aab5 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-ac1ba76a-6446-4d78-840a-2e561da0aab5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:07:32.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-111" for this suite. 
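
The five-minute stretch between setup and teardown is the point of this spec: pod4 claims hostPort 54322 with hostIP 0.0.0.0, which the scheduler treats as occupying that port on every host address, so pod5 (same port, hostIP 127.0.0.1, pinned to the same node through the temporary label) can never fit and stays Pending until the test is satisfied the conflict holds. A sketch of the two conflicting declarations, assuming v0.17.x API types; the label key/value echo the log, while the container details are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // podWithHostPort builds a minimal pod pinned to the labelled node,
    // publishing its container port on the given hostIP at port 54322.
    func podWithHostPort(name, hostIP string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: name},
            Spec: corev1.PodSpec{
                // The test first applies a random label to one node and pins
                // both pods there; key and value below are copied from the log.
                NodeSelector: map[string]string{
                    "kubernetes.io/e2e-ac1ba76a-6446-4d78-840a-2e561da0aab5": "95",
                },
                Containers: []corev1.Container{{
                    Name:  "agnhost",
                    Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                    Ports: []corev1.ContainerPort{{
                        ContainerPort: 8080,
                        HostPort:      54322,
                        HostIP:        hostIP,
                    }},
                }},
            },
        }
    }

    func main() {
        pod4 := podWithHostPort("pod4", "0.0.0.0")   // schedules: port free everywhere
        pod5 := podWithHostPort("pod5", "127.0.0.1") // conflicts: 0.0.0.0 already covers this IP
        fmt.Println(pod4.Name, "schedules;", pod5.Name, "stays Pending on the same node")
    }
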
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.276 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":65,"skipped":935,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:07:32.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Dec 23 02:07:32.858: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Dec 23 02:07:33.273: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Dec 23 02:07:35.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286053, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286053, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286053, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286053, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 02:07:37.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286053, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286053, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286053, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286053, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 02:07:40.387: INFO: Waited 622.045426ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:07:40.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9953" for this suite. 
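
Registering the sample API server comes down to deploying it behind a Service and creating an APIService object that tells the aggregation layer which group/version to proxy to that Service. A sketch of that object, assuming the kube-aggregator v0.17.x types; the group, service name, and CA bundle below are illustrative placeholders, not the exact values the test uses:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
    )

    func main() {
        // caBundle would be the CA that signed the sample server's serving cert.
        caBundle := []byte("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----")

        apiService := &apiregv1.APIService{
            // APIService names must be "<version>.<group>".
            ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
            Spec: apiregv1.APIServiceSpec{
                Group:   "wardle.example.com",
                Version: "v1alpha1",
                Service: &apiregv1.ServiceReference{
                    Namespace: "aggregator-9953",
                    Name:      "sample-api",
                    // Port defaults to 443 when omitted.
                },
                CABundle:             caBundle,
                GroupPriorityMinimum: 2000,
                VersionPriority:      200,
            },
        }
        out, err := json.MarshalIndent(apiService, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }

Once the APIService reports Available, requests under /apis/<group>/<version>/... are proxied to the backing Service, which is what the 622ms readiness wait above is confirming.
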
• [SLOW TEST:8.246 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":66,"skipped":968,"failed":0} SSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:07:40.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:07:41.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2155" for this suite. 
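
No workloads are created in this spec: it only inspects the pre-provisioned "kubernetes" service in the "default" namespace and asserts that the API is exposed securely, i.e. through a port named "https" on 443. A sketch of the same check, assuming client-go v0.17.x (newer releases add a context.Context argument to Get):

    package main

    import (
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }

        // The master service is always named "kubernetes" in "default".
        svc, err := client.CoreV1().Services("default").Get("kubernetes", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range svc.Spec.Ports {
            if p.Name == "https" && p.Port == 443 {
                fmt.Println("master service serves https on 443")
                return
            }
        }
        log.Fatal("no https/443 port on the kubernetes service")
    }
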
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":67,"skipped":973,"failed":0} S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:07:41.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-426ad4ef-ca63-4fdc-97a2-9f506988f8c0 in namespace container-probe-3311 Dec 23 02:07:45.364: INFO: Started pod liveness-426ad4ef-ca63-4fdc-97a2-9f506988f8c0 in namespace container-probe-3311 STEP: checking the pod's current state and verifying that restartCount is present Dec 23 02:07:45.366: INFO: Initial restart count of pod liveness-426ad4ef-ca63-4fdc-97a2-9f506988f8c0 is 0 Dec 23 02:08:01.400: INFO: Restart count of pod container-probe-3311/liveness-426ad4ef-ca63-4fdc-97a2-9f506988f8c0 is now 1 (16.03426514s elapsed) Dec 23 02:08:21.470: INFO: Restart count of pod container-probe-3311/liveness-426ad4ef-ca63-4fdc-97a2-9f506988f8c0 is now 2 (36.103758489s elapsed) Dec 23 02:08:41.511: INFO: Restart count of pod container-probe-3311/liveness-426ad4ef-ca63-4fdc-97a2-9f506988f8c0 is now 3 (56.14543015s elapsed) Dec 23 02:08:59.549: INFO: Restart count of pod container-probe-3311/liveness-426ad4ef-ca63-4fdc-97a2-9f506988f8c0 is now 4 (1m14.182603268s elapsed) Dec 23 02:10:11.711: INFO: Restart count of pod container-probe-3311/liveness-426ad4ef-ca63-4fdc-97a2-9f506988f8c0 is now 5 (2m26.34549198s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:10:11.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3311" for this suite. 
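
The restart counts above only ever increase because each liveness failure makes the kubelet kill and restart the container, and the growing gaps between restarts (about 16s, then 20s, eventually over a minute) reflect the kubelet's restart back-off. A sketch of a pod whose probe is guaranteed to fail, assuming v0.17.x API types (where the probe's handler field is still the embedded Handler); the image and commands are stand-ins, not the test's exact ones:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "liveness",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        // /tmp/health is never created, so every probe fails
                        // and the kubelet restarts the container each period.
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{
                                Command: []string{"cat", "/tmp/health"},
                            },
                        },
                        InitialDelaySeconds: 5,
                        PeriodSeconds:       5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        out, err := json.MarshalIndent(pod, "", "  ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(out))
    }

The test then watches status.containerStatuses[0].restartCount, which the kubelet only increments, never resets, hence "monotonically increasing".
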
• [SLOW TEST:150.451 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":974,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:10:11.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Dec 23 02:10:11.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3739' Dec 23 02:10:15.627: INFO: stderr: "" Dec 23 02:10:15.627: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Dec 23 02:10:15.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3739' Dec 23 02:10:15.751: INFO: stderr: "" Dec 23 02:10:15.751: INFO: stdout: "update-demo-nautilus-7fdvq update-demo-nautilus-ds6lv " Dec 23 02:10:15.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fdvq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3739' Dec 23 02:10:15.861: INFO: stderr: "" Dec 23 02:10:15.861: INFO: stdout: "" Dec 23 02:10:15.861: INFO: update-demo-nautilus-7fdvq is created but not running Dec 23 02:10:20.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3739' Dec 23 02:10:20.952: INFO: stderr: "" Dec 23 02:10:20.952: INFO: stdout: "update-demo-nautilus-7fdvq update-demo-nautilus-ds6lv " Dec 23 02:10:20.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fdvq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3739' Dec 23 02:10:21.046: INFO: stderr: "" Dec 23 02:10:21.046: INFO: stdout: "true" Dec 23 02:10:21.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7fdvq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3739' Dec 23 02:10:21.141: INFO: stderr: "" Dec 23 02:10:21.141: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 23 02:10:21.141: INFO: validating pod update-demo-nautilus-7fdvq Dec 23 02:10:21.145: INFO: got data: { "image": "nautilus.jpg" } Dec 23 02:10:21.145: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 23 02:10:21.145: INFO: update-demo-nautilus-7fdvq is verified up and running Dec 23 02:10:21.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ds6lv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3739' Dec 23 02:10:21.239: INFO: stderr: "" Dec 23 02:10:21.239: INFO: stdout: "true" Dec 23 02:10:21.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ds6lv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3739' Dec 23 02:10:21.352: INFO: stderr: "" Dec 23 02:10:21.352: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Dec 23 02:10:21.352: INFO: validating pod update-demo-nautilus-ds6lv Dec 23 02:10:21.362: INFO: got data: { "image": "nautilus.jpg" } Dec 23 02:10:21.362: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Dec 23 02:10:21.362: INFO: update-demo-nautilus-ds6lv is verified up and running STEP: using delete to clean up resources Dec 23 02:10:21.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3739' Dec 23 02:10:21.475: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Dec 23 02:10:21.475: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Dec 23 02:10:21.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3739' Dec 23 02:10:21.590: INFO: stderr: "No resources found in kubectl-3739 namespace.\n" Dec 23 02:10:21.590: INFO: stdout: "" Dec 23 02:10:21.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3739 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 23 02:10:21.700: INFO: stderr: "" Dec 23 02:10:21.700: INFO: stdout: "update-demo-nautilus-7fdvq\nupdate-demo-nautilus-ds6lv\n" Dec 23 02:10:22.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3739' Dec 23 02:10:22.297: INFO: stderr: "No resources found in kubectl-3739 namespace.\n" Dec 23 02:10:22.297: INFO: stdout: "" Dec 23 02:10:22.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3739 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Dec 23 02:10:22.403: INFO: stderr: "" Dec 23 02:10:22.403: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:10:22.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3739" for this suite. • [SLOW TEST:10.676 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":69,"skipped":997,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:10:22.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Dec 23 02:10:27.508: INFO: Successfully updated pod 
"labelsupdate17890186-6e9b-48af-957a-d0ce3ada194a" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:10:31.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4454" for this suite. • [SLOW TEST:9.135 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1005,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:10:31.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 02:10:31.652: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Dec 23 02:10:36.659: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Dec 23 02:10:36.659: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Dec 23 02:10:36.707: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3348 /apis/apps/v1/namespaces/deployment-3348/deployments/test-cleanup-deployment 77c78860-f84b-44fb-89d2-20bb1540c3f6 23930279 1 2020-12-23 02:10:36 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00309bf08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Dec 23 02:10:36.729: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-3348 /apis/apps/v1/namespaces/deployment-3348/replicasets/test-cleanup-deployment-55ffc6b7b6 7fb2b0b0-9370-483f-9616-a090a8abbb09 23930285 1 2020-12-23 02:10:36 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 77c78860-f84b-44fb-89d2-20bb1540c3f6 0xc001c98347 0xc001c98348}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001c983b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 23 02:10:36.729: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Dec 23 02:10:36.729: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3348 /apis/apps/v1/namespaces/deployment-3348/replicasets/test-cleanup-controller fad68763-4be6-428d-8327-9ec2ec7d2075 23930280 1 2020-12-23 02:10:31 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 77c78860-f84b-44fb-89d2-20bb1540c3f6 0xc001c9824f 0xc001c98260}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001c982d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Dec
23 02:10:36.786: INFO: Pod "test-cleanup-controller-q749p" is available: &Pod{ObjectMeta:{test-cleanup-controller-q749p test-cleanup-controller- deployment-3348 /api/v1/namespaces/deployment-3348/pods/test-cleanup-controller-q749p a0e8310a-bdc3-4959-a782-f94ae2327709 23930266 0 2020-12-23 02:10:31 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller fad68763-4be6-428d-8327-9ec2ec7d2075 0xc001c98877 0xc001c98878}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6q4lj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6q4lj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6q4lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:10:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:10:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:10:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-12-23 02:10:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.2,StartTime:2020-12-23 02:10:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:10:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://da25ef259da70459b099d415935a62186087eb673d1497ae3f919879afb60e4b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Dec 23 02:10:36.786: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-2htwn" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-2htwn test-cleanup-deployment-55ffc6b7b6- deployment-3348 /api/v1/namespaces/deployment-3348/pods/test-cleanup-deployment-55ffc6b7b6-2htwn 6110566a-e3bc-487f-9847-9dc795341d99 23930287 0 2020-12-23 02:10:36 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 7fb2b0b0-9370-483f-9616-a090a8abbb09 0xc001c98c07 0xc001c98c08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6q4lj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6q4lj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6q4lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerati
ons:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:10:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:10:36.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3348" for this suite. • [SLOW TEST:5.251 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":71,"skipped":1016,"failed":0} [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:10:36.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Dec 23 02:10:36.840: INFO: PodSpec: initContainers in spec.initContainers Dec 23 02:11:28.809: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9e49478b-2a14-4fdd-82c9-72571af41816", GenerateName:"", Namespace:"init-container-2724", SelfLink:"/api/v1/namespaces/init-container-2724/pods/pod-init-9e49478b-2a14-4fdd-82c9-72571af41816", UID:"091e13b8-bc1c-44b3-a590-e6a5d6ca2c91", ResourceVersion:"23930513", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63744286236, loc:(*time.Location)(0x791c680)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"name":"foo", "time":"840253418"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8dcb5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0037a96c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8dcb5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8dcb5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), 
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8dcb5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001d391c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002eb7d40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d39340)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001d39390)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001d39398), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001d3939c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286236, loc:(*time.Location)(0x791c680)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286236, loc:(*time.Location)(0x791c680)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286236, loc:(*time.Location)(0x791c680)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286236, loc:(*time.Location)(0x791c680)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.9", PodIP:"10.244.2.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.4"}}, StartTime:(*v1.Time)(0xc0021ec820), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0021ec8a0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002629420)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://529492bcc556f95584a81e99648725532b82ff61d964c95ec8b1af77eef15495", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021ec8c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0021ec840), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc001d3941f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:11:28.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2724" for this suite. 
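For reference, the pod dumped above can be reconstructed as a manifest; this minimal sketch is assembled from the spec fields printed in the log (name, labels, images, commands, and the run1 CPU quantity all appear in the dump). With restartPolicy: Always the kubelet keeps retrying the failing init1 container with backoff, so init2 and the app container run1 never start, which is exactly what the status above shows (init1 RestartCount 3, run1 still Waiting):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-9e49478b-2a14-4fdd-82c9-72571af41816
      labels:
        name: foo
        time: "840253418"
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: docker.io/library/busybox:1.29
        command: ["/bin/false"]        # exits non-zero on every attempt
      - name: init2
        image: docker.io/library/busybox:1.29
        command: ["/bin/true"]         # would succeed, but only runs after init1 does
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.1
        resources:
          limits:
            cpu: 100m                  # matches the 100m Quantity in the dump
          requests:
            cpu: 100m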
• [SLOW TEST:52.220 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":72,"skipped":1016,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:11:29.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-8422050b-a365-45a1-b101-b47109c31859 STEP: Creating configMap with name cm-test-opt-upd-a31d3783-8d0a-470d-8f50-41c485a040b2 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8422050b-a365-45a1-b101-b47109c31859 STEP: Updating configmap cm-test-opt-upd-a31d3783-8d0a-470d-8f50-41c485a040b2 STEP: Creating configMap with name cm-test-opt-create-2311c941-1c43-4334-ac9c-128841e83cc8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:12:51.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3070" for this suite. 
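For reference, one way to exercise the behavior above is a single projected volume whose configMap sources are marked optional, so the pod tolerates one map being deleted and another being created after startup while the kubelet keeps the files in sync. A minimal sketch (pod and container names are hypothetical and the image is assumed; only the configMap names come from the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-example   # hypothetical name
    spec:
      containers:
      - name: volume-test                      # hypothetical name
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image assumed
        volumeMounts:
        - name: projected-configmaps
          mountPath: /etc/projected-configmap-volumes
      volumes:
      - name: projected-configmaps
        projected:
          sources:
          - configMap:
              name: cm-test-opt-del-8422050b-a365-45a1-b101-b47109c31859
              optional: true                   # volume stays healthy after deletion
          - configMap:
              name: cm-test-opt-upd-a31d3783-8d0a-470d-8f50-41c485a040b2
              optional: true
          - configMap:
              name: cm-test-opt-create-2311c941-1c43-4334-ac9c-128841e83cc8
              optional: true                   # may be created after the pod starts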
• [SLOW TEST:82.748 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1036,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:12:51.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 23 02:12:51.867: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a804ba6-deb4-49aa-9041-eafbc431e6fa" in namespace "projected-568" to be "success or failure" Dec 23 02:12:51.876: INFO: Pod "downwardapi-volume-4a804ba6-deb4-49aa-9041-eafbc431e6fa": Phase="Pending", Reason="", readiness=false. Elapsed: 9.012546ms Dec 23 02:12:53.889: INFO: Pod "downwardapi-volume-4a804ba6-deb4-49aa-9041-eafbc431e6fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02171108s Dec 23 02:12:55.937: INFO: Pod "downwardapi-volume-4a804ba6-deb4-49aa-9041-eafbc431e6fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070371627s STEP: Saw pod success Dec 23 02:12:55.937: INFO: Pod "downwardapi-volume-4a804ba6-deb4-49aa-9041-eafbc431e6fa" satisfied condition "success or failure" Dec 23 02:12:55.966: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4a804ba6-deb4-49aa-9041-eafbc431e6fa container client-container: STEP: delete the pod Dec 23 02:12:56.008: INFO: Waiting for pod downwardapi-volume-4a804ba6-deb4-49aa-9041-eafbc431e6fa to disappear Dec 23 02:12:56.026: INFO: Pod downwardapi-volume-4a804ba6-deb4-49aa-9041-eafbc431e6fa no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:12:56.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-568" for this suite. 
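For reference, setting defaultMode on a projected downward API volume controls the permission bits on every file the volume writes, which is what the spec above asserts. A minimal sketch (pod name, image, and the 0400 mode value are assumptions; client-container is the container name from the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-defaultmode-example   # hypothetical name
    spec:
      containers:
      - name: client-container
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image assumed
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          defaultMode: 0400            # assumed value; applied to all files below
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name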
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1074,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:12:56.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 02:13:24.117: INFO: Container started at 2020-12-23 02:12:59 +0000 UTC, pod became ready at 2020-12-23 02:13:22 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:13:24.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3476" for this suite. • [SLOW TEST:28.092 seconds] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1085,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:13:24.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 23 02:13:24.945: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be 
created Dec 23 02:13:26.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286405, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286405, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286405, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286404, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 23 02:13:29.998: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:13:30.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-109" for this suite. STEP: Destroying namespace "webhook-109-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.190 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":76,"skipped":1164,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:13:30.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-3ee344ed-9196-41d9-9cd7-344f5a5652ee STEP: Creating a pod to test consume secrets Dec 23 02:13:30.398: INFO: Waiting up to 5m0s for pod "pod-secrets-18a87c12-715a-4a58-9042-10e0d725a077" in namespace "secrets-6206" to be "success or failure" Dec 23 02:13:30.405: INFO: Pod "pod-secrets-18a87c12-715a-4a58-9042-10e0d725a077": Phase="Pending", Reason="", readiness=false. Elapsed: 6.262591ms Dec 23 02:13:32.409: INFO: Pod "pod-secrets-18a87c12-715a-4a58-9042-10e0d725a077": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010264692s Dec 23 02:13:34.482: INFO: Pod "pod-secrets-18a87c12-715a-4a58-9042-10e0d725a077": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083667912s STEP: Saw pod success Dec 23 02:13:34.482: INFO: Pod "pod-secrets-18a87c12-715a-4a58-9042-10e0d725a077" satisfied condition "success or failure" Dec 23 02:13:34.485: INFO: Trying to get logs from node jerma-worker pod pod-secrets-18a87c12-715a-4a58-9042-10e0d725a077 container secret-env-test: STEP: delete the pod Dec 23 02:13:34.525: INFO: Waiting for pod pod-secrets-18a87c12-715a-4a58-9042-10e0d725a077 to disappear Dec 23 02:13:34.548: INFO: Pod pod-secrets-18a87c12-715a-4a58-9042-10e0d725a077 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:13:34.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6206" for this suite. 
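For reference, consuming a secret in environment variables (as the secret-env-test container above does) takes a valueFrom.secretKeyRef per variable. A minimal sketch (only the secret name comes from the log; the image, key, and variable names are assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-env-example    # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: secret-env-test
        image: docker.io/library/busybox:1.29    # image assumed
        command: ["sh", "-c", "env"]             # prints the injected variable
        env:
        - name: SECRET_DATA                      # variable name assumed
          valueFrom:
            secretKeyRef:
              name: secret-test-3ee344ed-9196-41d9-9cd7-344f5a5652ee
              key: data-1                        # key name assumed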
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:13:34.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-746 STEP: creating a selector STEP: Creating the service pods in kubernetes Dec 23 02:13:34.622: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Dec 23 02:14:00.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.165:8080/dial?request=hostname&protocol=http&host=10.244.2.8&port=8080&tries=1'] Namespace:pod-network-test-746 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 23 02:14:00.728: INFO: >>> kubeConfig: /root/.kube/config I1223 02:14:00.766892 6 log.go:172] (0xc001c9c210) (0xc0018b8be0) Create stream I1223 02:14:00.766949 6 log.go:172] (0xc001c9c210) (0xc0018b8be0) Stream added, broadcasting: 1 I1223 02:14:00.769064 6 log.go:172] (0xc001c9c210) Reply frame received for 1 I1223 02:14:00.769100 6 log.go:172] (0xc001c9c210) (0xc001fdc000) Create stream I1223 02:14:00.769112 6 log.go:172] (0xc001c9c210) (0xc001fdc000) Stream added, broadcasting: 3 I1223 02:14:00.770172 6 log.go:172] (0xc001c9c210) Reply frame received for 3 I1223 02:14:00.770225 6 log.go:172] (0xc001c9c210) (0xc001fdc0a0) Create stream I1223 02:14:00.770239 6 log.go:172] (0xc001c9c210) (0xc001fdc0a0) Stream added, broadcasting: 5 I1223 02:14:00.771044 6 log.go:172] (0xc001c9c210) Reply frame received for 5 I1223 02:14:00.928652 6 log.go:172] (0xc001c9c210) Data frame received for 3 I1223 02:14:00.928672 6 log.go:172] (0xc001fdc000) (3) Data frame handling I1223 02:14:00.928680 6 log.go:172] (0xc001fdc000) (3) Data frame sent I1223 02:14:00.929740 6 log.go:172] (0xc001c9c210) Data frame received for 5 I1223 02:14:00.929769 6 log.go:172] (0xc001fdc0a0) (5) Data frame handling I1223 02:14:00.930034 6 log.go:172] (0xc001c9c210) Data frame received for 3 I1223 02:14:00.930071 6 log.go:172] (0xc001fdc000) (3) Data frame handling I1223 02:14:00.931717 6 log.go:172] (0xc001c9c210) Data frame received for 1 I1223 02:14:00.931746 6 log.go:172] (0xc0018b8be0) (1) Data frame handling I1223 02:14:00.931762 6 log.go:172] (0xc0018b8be0) (1) Data frame sent I1223 02:14:00.931777 6 log.go:172] (0xc001c9c210) (0xc0018b8be0) Stream removed, broadcasting: 1 I1223 02:14:00.931801 6 log.go:172] (0xc001c9c210) Go away received I1223 02:14:00.931956 6 log.go:172] (0xc001c9c210) (0xc0018b8be0) Stream removed, broadcasting: 1 I1223 02:14:00.932005 6 
log.go:172] (0xc001c9c210) (0xc001fdc000) Stream removed, broadcasting: 3 I1223 02:14:00.932036 6 log.go:172] (0xc001c9c210) (0xc001fdc0a0) Stream removed, broadcasting: 5 Dec 23 02:14:00.932: INFO: Waiting for responses: map[] Dec 23 02:14:00.935: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.165:8080/dial?request=hostname&protocol=http&host=10.244.1.164&port=8080&tries=1'] Namespace:pod-network-test-746 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Dec 23 02:14:00.935: INFO: >>> kubeConfig: /root/.kube/config I1223 02:14:00.961346 6 log.go:172] (0xc002520d10) (0xc001fdc460) Create stream I1223 02:14:00.961372 6 log.go:172] (0xc002520d10) (0xc001fdc460) Stream added, broadcasting: 1 I1223 02:14:00.963607 6 log.go:172] (0xc002520d10) Reply frame received for 1 I1223 02:14:00.963641 6 log.go:172] (0xc002520d10) (0xc0023c4000) Create stream I1223 02:14:00.963649 6 log.go:172] (0xc002520d10) (0xc0023c4000) Stream added, broadcasting: 3 I1223 02:14:00.964419 6 log.go:172] (0xc002520d10) Reply frame received for 3 I1223 02:14:00.964439 6 log.go:172] (0xc002520d10) (0xc001fdc500) Create stream I1223 02:14:00.964445 6 log.go:172] (0xc002520d10) (0xc001fdc500) Stream added, broadcasting: 5 I1223 02:14:00.965422 6 log.go:172] (0xc002520d10) Reply frame received for 5 I1223 02:14:01.028050 6 log.go:172] (0xc002520d10) Data frame received for 3 I1223 02:14:01.028149 6 log.go:172] (0xc0023c4000) (3) Data frame handling I1223 02:14:01.028196 6 log.go:172] (0xc0023c4000) (3) Data frame sent I1223 02:14:01.028695 6 log.go:172] (0xc002520d10) Data frame received for 5 I1223 02:14:01.028731 6 log.go:172] (0xc001fdc500) (5) Data frame handling I1223 02:14:01.028749 6 log.go:172] (0xc002520d10) Data frame received for 3 I1223 02:14:01.028759 6 log.go:172] (0xc0023c4000) (3) Data frame handling I1223 02:14:01.030414 6 log.go:172] (0xc002520d10) Data frame received for 1 I1223 02:14:01.030439 6 log.go:172] (0xc001fdc460) (1) Data frame handling I1223 02:14:01.030456 6 log.go:172] (0xc001fdc460) (1) Data frame sent I1223 02:14:01.030565 6 log.go:172] (0xc002520d10) (0xc001fdc460) Stream removed, broadcasting: 1 I1223 02:14:01.030601 6 log.go:172] (0xc002520d10) Go away received I1223 02:14:01.030694 6 log.go:172] (0xc002520d10) (0xc001fdc460) Stream removed, broadcasting: 1 I1223 02:14:01.030727 6 log.go:172] (0xc002520d10) (0xc0023c4000) Stream removed, broadcasting: 3 I1223 02:14:01.030745 6 log.go:172] (0xc002520d10) (0xc001fdc500) Stream removed, broadcasting: 5 Dec 23 02:14:01.030: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:14:01.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-746" for this suite. 
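For reference, the /dial requests above are answered by agnhost's netexec HTTP server running in each test pod: the host test pod curls one pod's /dial endpoint, which in turn fetches /hostname from the target pod IP given in the query string. A minimal sketch of one such server pod (pod name is hypothetical; the image and port 8080 match the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: netserver-example          # hypothetical name
    spec:
      containers:
      - name: webserver
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["netexec", "--http-port=8080"]   # serves /hostname and /dial
        ports:
        - containerPort: 8080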
• [SLOW TEST:26.494 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1206,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:14:01.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4617 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4617 STEP: creating replication controller externalsvc in namespace services-4617 I1223 02:14:01.245199 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4617, replica count: 2 I1223 02:14:04.295719 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1223 02:14:07.295977 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Dec 23 02:14:07.528: INFO: Creating new exec pod Dec 23 02:14:11.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4617 execpod6r6sh -- /bin/sh -x -c nslookup nodeport-service' Dec 23 02:14:11.779: INFO: stderr: "I1223 02:14:11.679486 859 log.go:172] (0xc000b473f0) (0xc000b32460) Create stream\nI1223 02:14:11.679541 859 log.go:172] (0xc000b473f0) (0xc000b32460) Stream added, broadcasting: 1\nI1223 02:14:11.694674 859 log.go:172] (0xc000b473f0) Reply frame received for 1\nI1223 02:14:11.694723 859 log.go:172] (0xc000b473f0) (0xc000b32500) Create stream\nI1223 02:14:11.694734 859 log.go:172] (0xc000b473f0) (0xc000b32500) Stream added, broadcasting: 3\nI1223 02:14:11.696505 859 log.go:172] (0xc000b473f0) Reply frame received for 3\nI1223 02:14:11.696549 859 log.go:172] (0xc000b473f0) (0xc000b325a0) Create stream\nI1223 02:14:11.696563 859 log.go:172] (0xc000b473f0) (0xc000b325a0) Stream added, 
broadcasting: 5\nI1223 02:14:11.698514 859 log.go:172] (0xc000b473f0) Reply frame received for 5\nI1223 02:14:11.760663 859 log.go:172] (0xc000b473f0) Data frame received for 5\nI1223 02:14:11.760689 859 log.go:172] (0xc000b325a0) (5) Data frame handling\nI1223 02:14:11.760704 859 log.go:172] (0xc000b325a0) (5) Data frame sent\n+ nslookup nodeport-service\nI1223 02:14:11.770009 859 log.go:172] (0xc000b473f0) Data frame received for 3\nI1223 02:14:11.770030 859 log.go:172] (0xc000b32500) (3) Data frame handling\nI1223 02:14:11.770046 859 log.go:172] (0xc000b32500) (3) Data frame sent\nI1223 02:14:11.770798 859 log.go:172] (0xc000b473f0) Data frame received for 3\nI1223 02:14:11.770837 859 log.go:172] (0xc000b32500) (3) Data frame handling\nI1223 02:14:11.770869 859 log.go:172] (0xc000b32500) (3) Data frame sent\nI1223 02:14:11.770994 859 log.go:172] (0xc000b473f0) Data frame received for 5\nI1223 02:14:11.771019 859 log.go:172] (0xc000b325a0) (5) Data frame handling\nI1223 02:14:11.771192 859 log.go:172] (0xc000b473f0) Data frame received for 3\nI1223 02:14:11.771207 859 log.go:172] (0xc000b32500) (3) Data frame handling\nI1223 02:14:11.772904 859 log.go:172] (0xc000b473f0) Data frame received for 1\nI1223 02:14:11.772925 859 log.go:172] (0xc000b32460) (1) Data frame handling\nI1223 02:14:11.772936 859 log.go:172] (0xc000b32460) (1) Data frame sent\nI1223 02:14:11.772953 859 log.go:172] (0xc000b473f0) (0xc000b32460) Stream removed, broadcasting: 1\nI1223 02:14:11.773208 859 log.go:172] (0xc000b473f0) (0xc000b32460) Stream removed, broadcasting: 1\nI1223 02:14:11.773227 859 log.go:172] (0xc000b473f0) (0xc000b32500) Stream removed, broadcasting: 3\nI1223 02:14:11.773237 859 log.go:172] (0xc000b473f0) (0xc000b325a0) Stream removed, broadcasting: 5\n" Dec 23 02:14:11.780: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4617.svc.cluster.local\tcanonical name = externalsvc.services-4617.svc.cluster.local.\nName:\texternalsvc.services-4617.svc.cluster.local\nAddress: 10.102.207.133\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4617, will wait for the garbage collector to delete the pods Dec 23 02:14:11.839: INFO: Deleting ReplicationController externalsvc took: 6.799781ms Dec 23 02:14:12.240: INFO: Terminating ReplicationController externalsvc pods took: 400.283533ms Dec 23 02:14:24.355: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:14:24.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4617" for this suite. 
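For reference, the nslookup output above shows nodeport-service resolving as a CNAME once its type is switched, so after the change the service is equivalent to the following sketch (the FQDN is taken directly from the nslookup answer):

    apiVersion: v1
    kind: Service
    metadata:
      name: nodeport-service
      namespace: services-4617
    spec:
      type: ExternalName
      externalName: externalsvc.services-4617.svc.cluster.local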
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:23.380 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":79,"skipped":1215,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:14:24.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:14:24.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-938" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":80,"skipped":1220,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:14:24.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:14:24.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-951" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":81,"skipped":1226,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:14:24.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 23 02:14:24.718: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8001cba-2c43-47f4-a89a-c70c98863027" in namespace "downward-api-1917" to be "success or failure" Dec 23 02:14:24.723: INFO: Pod "downwardapi-volume-c8001cba-2c43-47f4-a89a-c70c98863027": Phase="Pending", Reason="", readiness=false. Elapsed: 4.991381ms Dec 23 02:14:26.727: INFO: Pod "downwardapi-volume-c8001cba-2c43-47f4-a89a-c70c98863027": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009046302s Dec 23 02:14:28.731: INFO: Pod "downwardapi-volume-c8001cba-2c43-47f4-a89a-c70c98863027": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012444702s STEP: Saw pod success Dec 23 02:14:28.731: INFO: Pod "downwardapi-volume-c8001cba-2c43-47f4-a89a-c70c98863027" satisfied condition "success or failure" Dec 23 02:14:28.733: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c8001cba-2c43-47f4-a89a-c70c98863027 container client-container: STEP: delete the pod Dec 23 02:14:28.831: INFO: Waiting for pod downwardapi-volume-c8001cba-2c43-47f4-a89a-c70c98863027 to disappear Dec 23 02:14:28.843: INFO: Pod downwardapi-volume-c8001cba-2c43-47f4-a89a-c70c98863027 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:14:28.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1917" for this suite. 
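For reference, unlike the earlier defaultMode test, this spec sets mode on an individual volume item, overriding the volume-wide default for that one file. A minimal sketch (pod name, image, and the 0400 value are assumptions; client-container is the container name from the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-item-mode-example   # hypothetical name
    spec:
      containers:
      - name: client-container
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image assumed
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            mode: 0400                  # per-item override; value assumed
            fieldRef:
              fieldPath: metadata.name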
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1231,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:14:28.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Dec 23 02:14:28.899: INFO: Waiting up to 5m0s for pod "pod-0f7d387b-fbfd-4517-bfdb-22261902bf6a" in namespace "emptydir-4525" to be "success or failure" Dec 23 02:14:28.921: INFO: Pod "pod-0f7d387b-fbfd-4517-bfdb-22261902bf6a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.111489ms Dec 23 02:14:30.937: INFO: Pod "pod-0f7d387b-fbfd-4517-bfdb-22261902bf6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037724427s Dec 23 02:14:32.940: INFO: Pod "pod-0f7d387b-fbfd-4517-bfdb-22261902bf6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041094273s STEP: Saw pod success Dec 23 02:14:32.941: INFO: Pod "pod-0f7d387b-fbfd-4517-bfdb-22261902bf6a" satisfied condition "success or failure" Dec 23 02:14:32.943: INFO: Trying to get logs from node jerma-worker2 pod pod-0f7d387b-fbfd-4517-bfdb-22261902bf6a container test-container: STEP: delete the pod Dec 23 02:14:33.012: INFO: Waiting for pod pod-0f7d387b-fbfd-4517-bfdb-22261902bf6a to disappear Dec 23 02:14:33.023: INFO: Pod pod-0f7d387b-fbfd-4517-bfdb-22261902bf6a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:14:33.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4525" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1267,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:14:33.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-ckdd STEP: Creating a pod to test atomic-volume-subpath Dec 23 02:14:33.109: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ckdd" in namespace "subpath-8044" to be "success or failure" Dec 23 02:14:33.129: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 19.746616ms Dec 23 02:14:35.133: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023585383s Dec 23 02:14:37.137: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 4.027549999s Dec 23 02:14:39.140: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 6.030595332s Dec 23 02:14:41.144: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 8.034330755s Dec 23 02:14:43.148: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 10.038125414s Dec 23 02:14:45.152: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 12.042209371s Dec 23 02:14:47.156: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 14.046058727s Dec 23 02:14:49.159: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 16.049780642s Dec 23 02:14:51.164: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 18.05420308s Dec 23 02:14:53.168: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 20.05828262s Dec 23 02:14:55.172: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 22.062772311s Dec 23 02:14:57.177: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Running", Reason="", readiness=true. Elapsed: 24.067395098s Dec 23 02:14:59.183: INFO: Pod "pod-subpath-test-downwardapi-ckdd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.073865692s STEP: Saw pod success Dec 23 02:14:59.183: INFO: Pod "pod-subpath-test-downwardapi-ckdd" satisfied condition "success or failure" Dec 23 02:14:59.186: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-ckdd container test-container-subpath-downwardapi-ckdd: STEP: delete the pod Dec 23 02:14:59.205: INFO: Waiting for pod pod-subpath-test-downwardapi-ckdd to disappear Dec 23 02:14:59.209: INFO: Pod pod-subpath-test-downwardapi-ckdd no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-ckdd Dec 23 02:14:59.209: INFO: Deleting pod "pod-subpath-test-downwardapi-ckdd" in namespace "subpath-8044" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:14:59.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8044" for this suite. • [SLOW TEST:26.186 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":84,"skipped":1268,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:14:59.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-ba6682cd-b35c-4870-815f-de4dc9ac1383 STEP: Creating secret with name s-test-opt-upd-4ce59c1a-ea05-46ff-acf2-9497b2a6796f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ba6682cd-b35c-4870-815f-de4dc9ac1383 STEP: Updating secret s-test-opt-upd-4ce59c1a-ea05-46ff-acf2-9497b2a6796f STEP: Creating secret with name s-test-opt-create-5c0a0c4e-4592-4051-a762-232def95bff1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:15:07.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7431" for this suite. 
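The optional-secret test above creates a pod whose volumes reference secrets marked optional, then deletes one secret, updates another, and creates a third, waiting for the kubelet to project each change into the volume. A rough sketch of such a pod, with illustrative names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "optional-secret-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "watcher",
				Image: "busybox",
				// Re-read the mounted file so updates become observable in the logs.
				Command:      []string{"sh", "-c", "while true; do cat /etc/secret/data 2>/dev/null; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "sec", MountPath: "/etc/secret"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "sec",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt", // hypothetical name
						// Optional: the pod starts even if the secret is absent,
						// and the files appear once the secret is created.
						Optional: &optional,
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}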
• [SLOW TEST:8.183 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:15:07.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-235/configmap-test-45a4902b-717a-4069-80c1-6e728ef32914 STEP: Creating a pod to test consume configMaps Dec 23 02:15:07.494: INFO: Waiting up to 5m0s for pod "pod-configmaps-b296024d-93c4-45f8-83b1-44e6d15ae063" in namespace "configmap-235" to be "success or failure" Dec 23 02:15:07.513: INFO: Pod "pod-configmaps-b296024d-93c4-45f8-83b1-44e6d15ae063": Phase="Pending", Reason="", readiness=false. Elapsed: 18.604608ms Dec 23 02:15:09.517: INFO: Pod "pod-configmaps-b296024d-93c4-45f8-83b1-44e6d15ae063": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023084617s Dec 23 02:15:11.521: INFO: Pod "pod-configmaps-b296024d-93c4-45f8-83b1-44e6d15ae063": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02652486s STEP: Saw pod success Dec 23 02:15:11.521: INFO: Pod "pod-configmaps-b296024d-93c4-45f8-83b1-44e6d15ae063" satisfied condition "success or failure" Dec 23 02:15:11.523: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-b296024d-93c4-45f8-83b1-44e6d15ae063 container env-test: STEP: delete the pod Dec 23 02:15:11.717: INFO: Waiting for pod pod-configmaps-b296024d-93c4-45f8-83b1-44e6d15ae063 to disappear Dec 23 02:15:11.773: INFO: Pod pod-configmaps-b296024d-93c4-45f8-83b1-44e6d15ae063 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:15:11.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-235" for this suite. 
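Consuming a ConfigMap "via the environment", as in the configmap-235 test above, means wiring a container env var to one key of a ConfigMap. A minimal sketch, with hypothetical ConfigMap and key names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $CONFIG_DATA"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA",
					ValueFrom: &corev1.EnvVarSource{
						// The env var's value is resolved from the named ConfigMap key
						// when the container starts.
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"}, // hypothetical
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}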
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:15:11.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1282.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1282.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1282.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1282.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1282.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1282.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Dec 23 02:15:17.994: INFO: DNS probes using dns-1282/dns-test-ab15caf0-9197-4edf-8692-0017d101e5bf succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:15:18.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1282" for this suite. 
• [SLOW TEST:6.326 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":87,"skipped":1409,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:15:18.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Dec 23 02:15:19.240: INFO: Pod name wrapped-volume-race-a09b8b80-bfae-4cd7-b84f-7ad0ab74e015: Found 0 pods out of 5 Dec 23 02:15:24.252: INFO: Pod name wrapped-volume-race-a09b8b80-bfae-4cd7-b84f-7ad0ab74e015: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a09b8b80-bfae-4cd7-b84f-7ad0ab74e015 in namespace emptydir-wrapper-5429, will wait for the garbage collector to delete the pods Dec 23 02:15:38.342: INFO: Deleting ReplicationController wrapped-volume-race-a09b8b80-bfae-4cd7-b84f-7ad0ab74e015 took: 7.485705ms Dec 23 02:15:38.742: INFO: Terminating ReplicationController wrapped-volume-race-a09b8b80-bfae-4cd7-b84f-7ad0ab74e015 pods took: 400.299278ms STEP: Creating RC which spawns configmap-volume pods Dec 23 02:15:54.596: INFO: Pod name wrapped-volume-race-6d1b2cc4-ee16-40a9-baaf-86f5ce9e932f: Found 0 pods out of 5 Dec 23 02:15:59.603: INFO: Pod name wrapped-volume-race-6d1b2cc4-ee16-40a9-baaf-86f5ce9e932f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6d1b2cc4-ee16-40a9-baaf-86f5ce9e932f in namespace emptydir-wrapper-5429, will wait for the garbage collector to delete the pods Dec 23 02:16:13.714: INFO: Deleting ReplicationController wrapped-volume-race-6d1b2cc4-ee16-40a9-baaf-86f5ce9e932f took: 7.849576ms Dec 23 02:16:14.115: INFO: Terminating ReplicationController wrapped-volume-race-6d1b2cc4-ee16-40a9-baaf-86f5ce9e932f pods took: 400.258548ms STEP: Creating RC which spawns configmap-volume pods Dec 23 02:16:24.445: INFO: Pod name wrapped-volume-race-542cbeca-71ba-400b-ba53-5f8c96df924e: Found 0 pods out of 5 Dec 23 02:16:29.451: INFO: Pod name wrapped-volume-race-542cbeca-71ba-400b-ba53-5f8c96df924e: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-542cbeca-71ba-400b-ba53-5f8c96df924e in namespace emptydir-wrapper-5429, will wait for the garbage collector to delete the pods Dec 23 02:16:44.031: INFO: Deleting 
ReplicationController wrapped-volume-race-542cbeca-71ba-400b-ba53-5f8c96df924e took: 12.099699ms Dec 23 02:16:44.431: INFO: Terminating ReplicationController wrapped-volume-race-542cbeca-71ba-400b-ba53-5f8c96df924e pods took: 400.282257ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:16:56.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5429" for this suite. • [SLOW TEST:97.916 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":88,"skipped":1413,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:16:56.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 23 02:16:56.098: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2353a2aa-fae2-438e-9069-1adfa8ffbed3" in namespace "downward-api-2609" to be "success or failure" Dec 23 02:16:56.116: INFO: Pod "downwardapi-volume-2353a2aa-fae2-438e-9069-1adfa8ffbed3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.812595ms Dec 23 02:16:58.120: INFO: Pod "downwardapi-volume-2353a2aa-fae2-438e-9069-1adfa8ffbed3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021783349s Dec 23 02:17:00.123: INFO: Pod "downwardapi-volume-2353a2aa-fae2-438e-9069-1adfa8ffbed3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024997487s STEP: Saw pod success Dec 23 02:17:00.123: INFO: Pod "downwardapi-volume-2353a2aa-fae2-438e-9069-1adfa8ffbed3" satisfied condition "success or failure" Dec 23 02:17:00.126: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2353a2aa-fae2-438e-9069-1adfa8ffbed3 container client-container: STEP: delete the pod Dec 23 02:17:00.158: INFO: Waiting for pod downwardapi-volume-2353a2aa-fae2-438e-9069-1adfa8ffbed3 to disappear Dec 23 02:17:00.162: INFO: Pod downwardapi-volume-2353a2aa-fae2-438e-9069-1adfa8ffbed3 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:17:00.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2609" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1420,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:17:00.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Dec 23 02:17:00.788: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Dec 23 02:17:03.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286620, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286620, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286620, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286620, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Dec 23 02:17:06.098: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 02:17:06.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4724-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:17:07.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9354" for this suite. STEP: Destroying namespace "webhook-9354-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.293 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":90,"skipped":1421,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:17:07.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 02:17:07.531: INFO: Creating deployment "test-recreate-deployment" Dec 23 02:17:07.547: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Dec 23 02:17:07.571: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Dec 23 02:17:09.592: INFO: Waiting deployment "test-recreate-deployment" to complete Dec 23 02:17:09.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286627, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286627, loc:(*time.Location)(0x791c680)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286627, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286627, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Dec 23 02:17:11.611: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Dec 23 02:17:11.618: INFO: Updating deployment test-recreate-deployment Dec 23 02:17:11.618: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Dec 23 02:17:12.073: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1683 /apis/apps/v1/namespaces/deployment-1683/deployments/test-recreate-deployment d98d2da9-e8b1-4e03-8891-ae9857ad0bbe 23933042 2 2020-12-23 02:17:07 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0031217a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-12-23 02:17:11 +0000 UTC,LastTransitionTime:2020-12-23 02:17:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-12-23 02:17:11 +0000 UTC,LastTransitionTime:2020-12-23 02:17:07 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Dec 23 02:17:12.277: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1683 /apis/apps/v1/namespaces/deployment-1683/replicasets/test-recreate-deployment-5f94c574ff 73d53c6a-756a-4f9d-83fc-bf639efde937 23933039 1 2020-12-23 02:17:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment 
test-recreate-deployment d98d2da9-e8b1-4e03-8891-ae9857ad0bbe 0xc003121b37 0xc003121b38}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003121b98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 23 02:17:12.277: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Dec 23 02:17:12.277: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-1683 /apis/apps/v1/namespaces/deployment-1683/replicasets/test-recreate-deployment-799c574856 4864a972-9965-4366-a05d-8b5948e1e186 23933031 2 2020-12-23 02:17:07 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment d98d2da9-e8b1-4e03-8891-ae9857ad0bbe 0xc003121c07 0xc003121c08}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003121c78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Dec 23 02:17:12.301: INFO: Pod "test-recreate-deployment-5f94c574ff-fcbzx" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-fcbzx test-recreate-deployment-5f94c574ff- deployment-1683 /api/v1/namespaces/deployment-1683/pods/test-recreate-deployment-5f94c574ff-fcbzx ddd4132a-251c-4463-b8cb-5e5cdcf4cec0 23933044 0 2020-12-23 02:17:11 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 73d53c6a-756a-4f9d-83fc-bf639efde937 0xc00557e0d7 0xc00557e0d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5fjch,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5fjch,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5fjch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:17:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:17:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:17:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-12-23 02:17:12 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:17:12.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1683" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":91,"skipped":1430,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:17:12.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Dec 23 02:17:12.368: INFO: Waiting up to 5m0s for pod "pod-912a7f46-cb21-40aa-8f75-fb5d672c9356" in namespace "emptydir-2594" to be "success or failure" Dec 23 02:17:12.372: INFO: Pod "pod-912a7f46-cb21-40aa-8f75-fb5d672c9356": Phase="Pending", Reason="", readiness=false. Elapsed: 3.811922ms Dec 23 02:17:14.376: INFO: Pod "pod-912a7f46-cb21-40aa-8f75-fb5d672c9356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007409645s Dec 23 02:17:16.380: INFO: Pod "pod-912a7f46-cb21-40aa-8f75-fb5d672c9356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011136316s STEP: Saw pod success Dec 23 02:17:16.380: INFO: Pod "pod-912a7f46-cb21-40aa-8f75-fb5d672c9356" satisfied condition "success or failure" Dec 23 02:17:16.382: INFO: Trying to get logs from node jerma-worker pod pod-912a7f46-cb21-40aa-8f75-fb5d672c9356 container test-container: STEP: delete the pod Dec 23 02:17:16.426: INFO: Waiting for pod pod-912a7f46-cb21-40aa-8f75-fb5d672c9356 to disappear Dec 23 02:17:16.444: INFO: Pod pod-912a7f46-cb21-40aa-8f75-fb5d672c9356 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:17:16.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2594" for this suite. 
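For the RecreateDeployment transcript earlier in this block, the behavior under test hangs entirely on the strategy type: Recreate scales the old ReplicaSet to zero before the new one comes up, so old and new pods never run side by side. A sketch of the deployment, reusing the names and image from the log:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	dep := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate tears down the old ReplicaSet completely before the
			// new one starts, which is what the test's watch verifies.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(b))
}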
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1450,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:17:16.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 23 02:17:21.104: INFO: Successfully updated pod "pod-update-a1582426-c6e9-473c-b6b2-b2144bb3e91b" STEP: verifying the updated pod is in kubernetes Dec 23 02:17:21.120: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:17:21.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5332" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1459,"failed":0} ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:17:21.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Dec 23 02:17:21.214: INFO: Waiting up to 5m0s for pod "var-expansion-85895155-185b-43b1-8b11-6af86ed90f20" in namespace "var-expansion-6264" to be "success or failure" Dec 23 02:17:21.245: INFO: Pod "var-expansion-85895155-185b-43b1-8b11-6af86ed90f20": Phase="Pending", Reason="", readiness=false. Elapsed: 31.003517ms Dec 23 02:17:23.249: INFO: Pod "var-expansion-85895155-185b-43b1-8b11-6af86ed90f20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035123935s Dec 23 02:17:25.253: INFO: Pod "var-expansion-85895155-185b-43b1-8b11-6af86ed90f20": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.039497677s STEP: Saw pod success Dec 23 02:17:25.253: INFO: Pod "var-expansion-85895155-185b-43b1-8b11-6af86ed90f20" satisfied condition "success or failure" Dec 23 02:17:25.256: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-85895155-185b-43b1-8b11-6af86ed90f20 container dapi-container: STEP: delete the pod Dec 23 02:17:25.289: INFO: Waiting for pod var-expansion-85895155-185b-43b1-8b11-6af86ed90f20 to disappear Dec 23 02:17:25.296: INFO: Pod var-expansion-85895155-185b-43b1-8b11-6af86ed90f20 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:17:25.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6264" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1459,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:17:25.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Dec 23 02:17:25.451: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:25.456: INFO: Number of nodes with available pods: 0 Dec 23 02:17:25.456: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:17:26.492: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:26.495: INFO: Number of nodes with available pods: 0 Dec 23 02:17:26.495: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:17:27.461: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:27.464: INFO: Number of nodes with available pods: 0 Dec 23 02:17:27.464: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:17:28.547: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:28.553: INFO: Number of nodes with available pods: 0 Dec 23 02:17:28.553: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:17:29.480: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:29.483: INFO: Number of nodes with available pods: 0 Dec 23 02:17:29.483: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:17:30.463: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:30.469: INFO: Number of nodes with available pods: 2 Dec 23 02:17:30.469: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Dec 23 02:17:30.552: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:30.593: INFO: Number of nodes with available pods: 1 Dec 23 02:17:30.593: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:17:31.597: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:31.601: INFO: Number of nodes with available pods: 1 Dec 23 02:17:31.601: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:17:32.769: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:32.772: INFO: Number of nodes with available pods: 1 Dec 23 02:17:32.772: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:17:33.598: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:33.603: INFO: Number of nodes with available pods: 1 Dec 23 02:17:33.603: INFO: Node jerma-worker is running more than one daemon pod Dec 23 02:17:34.608: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Dec 23 02:17:34.611: INFO: Number of nodes with available pods: 2 Dec 23 02:17:34.611: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-508, will wait for the garbage collector to delete the pods Dec 23 02:17:34.673: INFO: Deleting DaemonSet.extensions daemon-set took: 6.884394ms Dec 23 02:17:35.074: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.338666ms Dec 23 02:17:44.389: INFO: Number of nodes with available pods: 0 Dec 23 02:17:44.389: INFO: Number of running nodes: 0, number of available pods: 0 Dec 23 02:17:44.392: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-508/daemonsets","resourceVersion":"23933309"},"items":null} Dec 23 02:17:44.394: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-508/pods","resourceVersion":"23933309"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:17:44.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-508" for this suite. 
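The DaemonSet transcript above shows two things: the controller skips the tainted control-plane node (the pod template carries no master toleration), and it re-creates a daemon pod after one is forced to Failed. A minimal DaemonSet of the same shape, with an illustrative label key and image:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"} // hypothetical label key
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// With no toleration for node-role.kubernetes.io/master,
					// the controller schedules one pod on each worker only,
					// matching the "skip checking this node" lines above.
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine", // stand-in image
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(b))
}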
• [SLOW TEST:19.106 seconds] [sig-apps] Daemon set [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":95,"skipped":1474,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:17:44.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Dec 23 02:17:44.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6827cc58-7bed-4c52-8b4c-78f7cfbcc64c" in namespace "projected-5685" to be "success or failure" Dec 23 02:17:44.471: INFO: Pod "downwardapi-volume-6827cc58-7bed-4c52-8b4c-78f7cfbcc64c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.877755ms Dec 23 02:17:46.476: INFO: Pod "downwardapi-volume-6827cc58-7bed-4c52-8b4c-78f7cfbcc64c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009335686s Dec 23 02:17:48.479: INFO: Pod "downwardapi-volume-6827cc58-7bed-4c52-8b4c-78f7cfbcc64c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013063104s STEP: Saw pod success Dec 23 02:17:48.479: INFO: Pod "downwardapi-volume-6827cc58-7bed-4c52-8b4c-78f7cfbcc64c" satisfied condition "success or failure" Dec 23 02:17:48.482: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6827cc58-7bed-4c52-8b4c-78f7cfbcc64c container client-container: STEP: delete the pod Dec 23 02:17:48.512: INFO: Waiting for pod downwardapi-volume-6827cc58-7bed-4c52-8b4c-78f7cfbcc64c to disappear Dec 23 02:17:48.518: INFO: Pod downwardapi-volume-6827cc58-7bed-4c52-8b4c-78f7cfbcc64c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:17:48.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5685" for this suite. 
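Setting a mode on an item file, as in the projected downwardAPI test above, gives one projected item its own permission bits, overriding the volume's default. A sketch with an assumed 0400 mode and illustrative paths:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // assumed restrictive per-item mode
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-downward-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c %a /etc/podinfo/podname && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
									// Mode applies to this item only.
									Mode: &mode,
								}},
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}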
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1488,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:17:48.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-aebcd692-1dfb-45d5-8886-d48eb2a0527a STEP: Creating a pod to test consume configMaps Dec 23 02:17:48.621: INFO: Waiting up to 5m0s for pod "pod-configmaps-40d67d56-5def-441e-a4d1-8556e6a77ebb" in namespace "configmap-3060" to be "success or failure" Dec 23 02:17:48.626: INFO: Pod "pod-configmaps-40d67d56-5def-441e-a4d1-8556e6a77ebb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193326ms Dec 23 02:17:50.630: INFO: Pod "pod-configmaps-40d67d56-5def-441e-a4d1-8556e6a77ebb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008489541s Dec 23 02:17:52.716: INFO: Pod "pod-configmaps-40d67d56-5def-441e-a4d1-8556e6a77ebb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094306507s STEP: Saw pod success Dec 23 02:17:52.716: INFO: Pod "pod-configmaps-40d67d56-5def-441e-a4d1-8556e6a77ebb" satisfied condition "success or failure" Dec 23 02:17:52.718: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-40d67d56-5def-441e-a4d1-8556e6a77ebb container configmap-volume-test: STEP: delete the pod Dec 23 02:17:52.739: INFO: Waiting for pod pod-configmaps-40d67d56-5def-441e-a4d1-8556e6a77ebb to disappear Dec 23 02:17:52.744: INFO: Pod pod-configmaps-40d67d56-5def-441e-a4d1-8556e6a77ebb no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:17:52.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3060" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1492,"failed":0} S ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:17:52.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Dec 23 02:17:57.357: INFO: Successfully updated pod "pod-update-activedeadlineseconds-fe74356f-b86e-4f88-9562-e89761337cb1" Dec 23 02:17:57.358: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-fe74356f-b86e-4f88-9562-e89761337cb1" in namespace "pods-5391" to be "terminated due to deadline exceeded" Dec 23 02:17:57.361: INFO: Pod "pod-update-activedeadlineseconds-fe74356f-b86e-4f88-9562-e89761337cb1": Phase="Running", Reason="", readiness=true. Elapsed: 3.856002ms Dec 23 02:17:59.365: INFO: Pod "pod-update-activedeadlineseconds-fe74356f-b86e-4f88-9562-e89761337cb1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.007308837s Dec 23 02:17:59.365: INFO: Pod "pod-update-activedeadlineseconds-fe74356f-b86e-4f88-9562-e89761337cb1" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 23 02:17:59.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5391" for this suite. 
• [SLOW TEST:6.624 seconds] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1493,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 23 02:17:59.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Dec 23 02:17:59.444: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
alternatives.log
containers/
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap that has name configmap-test-emptyKey-1c52e227-6297-48a8-916e-7c8df62335f4
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:17:59.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7240" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":100,"skipped":1497,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:17:59.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
Dec 23 02:18:00.263: INFO: created pod pod-service-account-defaultsa
Dec 23 02:18:00.263: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 23 02:18:00.273: INFO: created pod pod-service-account-mountsa
Dec 23 02:18:00.273: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 23 02:18:00.298: INFO: created pod pod-service-account-nomountsa
Dec 23 02:18:00.298: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 23 02:18:00.319: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 23 02:18:00.319: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 23 02:18:00.384: INFO: created pod pod-service-account-mountsa-mountspec
Dec 23 02:18:00.384: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 23 02:18:00.414: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 23 02:18:00.414: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 23 02:18:00.445: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 23 02:18:00.445: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 23 02:18:00.475: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 23 02:18:00.475: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 23 02:18:00.528: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 23 02:18:00.528: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:18:00.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9685" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":101,"skipped":1544,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:18:00.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 23 02:18:01.174: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 23 02:18:03.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:18:05.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:18:07.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:18:09.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:18:11.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286681, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 02:18:14.597: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:18:26.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2181" for this suite.
STEP: Destroying namespace "webhook-2181-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:26.247 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":102,"skipped":1573,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:18:26.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Dec 23 02:18:33.497: INFO: Successfully updated pod "annotationupdate9e5ace44-57ef-4a23-80cf-ca65956face3"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:18:35.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-379" for this suite.

• [SLOW TEST:8.735 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1578,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:18:35.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:185
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:18:35.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6630" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":104,"skipped":1591,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:18:35.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 23 02:18:45.940: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 02:18:45.962: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 02:18:47.962: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 02:18:47.966: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 02:18:49.963: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 02:18:49.990: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 02:18:51.962: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 02:18:51.967: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 23 02:18:53.963: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 23 02:18:53.967: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:18:53.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-593" for this suite.

• [SLOW TEST:18.173 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1594,"failed":0}
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:18:53.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
Dec 23 02:18:54.069: INFO: Waiting up to 5m0s for pod "client-containers-a982b70c-84a0-427c-bfa0-a15e32889291" in namespace "containers-4550" to be "success or failure"
Dec 23 02:18:54.096: INFO: Pod "client-containers-a982b70c-84a0-427c-bfa0-a15e32889291": Phase="Pending", Reason="", readiness=false. Elapsed: 26.534336ms
Dec 23 02:18:56.099: INFO: Pod "client-containers-a982b70c-84a0-427c-bfa0-a15e32889291": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029835488s
Dec 23 02:18:58.102: INFO: Pod "client-containers-a982b70c-84a0-427c-bfa0-a15e32889291": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032762037s
STEP: Saw pod success
Dec 23 02:18:58.102: INFO: Pod "client-containers-a982b70c-84a0-427c-bfa0-a15e32889291" satisfied condition "success or failure"
Dec 23 02:18:58.105: INFO: Trying to get logs from node jerma-worker pod client-containers-a982b70c-84a0-427c-bfa0-a15e32889291 container test-container: 
STEP: delete the pod
Dec 23 02:18:58.136: INFO: Waiting for pod client-containers-a982b70c-84a0-427c-bfa0-a15e32889291 to disappear
Dec 23 02:18:58.147: INFO: Pod client-containers-a982b70c-84a0-427c-bfa0-a15e32889291 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:18:58.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4550" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1594,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:18:58.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:18:58.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:19:02.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2496" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1632,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:19:02.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-24d42311-a31f-44d9-97c8-bbb3f7553d0c
STEP: Creating a pod to test consume configMaps
Dec 23 02:19:02.497: INFO: Waiting up to 5m0s for pod "pod-configmaps-b89d40da-2c97-4564-b10b-f60e712c6378" in namespace "configmap-1058" to be "success or failure"
Dec 23 02:19:02.501: INFO: Pod "pod-configmaps-b89d40da-2c97-4564-b10b-f60e712c6378": Phase="Pending", Reason="", readiness=false. Elapsed: 4.399254ms
Dec 23 02:19:04.505: INFO: Pod "pod-configmaps-b89d40da-2c97-4564-b10b-f60e712c6378": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008307452s
Dec 23 02:19:06.509: INFO: Pod "pod-configmaps-b89d40da-2c97-4564-b10b-f60e712c6378": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012210362s
STEP: Saw pod success
Dec 23 02:19:06.509: INFO: Pod "pod-configmaps-b89d40da-2c97-4564-b10b-f60e712c6378" satisfied condition "success or failure"
Dec 23 02:19:06.512: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-b89d40da-2c97-4564-b10b-f60e712c6378 container configmap-volume-test: 
STEP: delete the pod
Dec 23 02:19:06.547: INFO: Waiting for pod pod-configmaps-b89d40da-2c97-4564-b10b-f60e712c6378 to disappear
Dec 23 02:19:06.550: INFO: Pod pod-configmaps-b89d40da-2c97-4564-b10b-f60e712c6378 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:19:06.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1058" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1677,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:19:06.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-p5mv
STEP: Creating a pod to test atomic-volume-subpath
Dec 23 02:19:06.745: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-p5mv" in namespace "subpath-1031" to be "success or failure"
Dec 23 02:19:06.749: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657918ms
Dec 23 02:19:08.753: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007546486s
Dec 23 02:19:10.768: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Running", Reason="", readiness=true. Elapsed: 4.02273105s
Dec 23 02:19:12.771: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Running", Reason="", readiness=true. Elapsed: 6.025448785s
Dec 23 02:19:14.774: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Running", Reason="", readiness=true. Elapsed: 8.029096802s
Dec 23 02:19:16.799: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Running", Reason="", readiness=true. Elapsed: 10.053190312s
Dec 23 02:19:18.802: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Running", Reason="", readiness=true. Elapsed: 12.057007871s
Dec 23 02:19:20.809: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Running", Reason="", readiness=true. Elapsed: 14.064008724s
Dec 23 02:19:22.814: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Running", Reason="", readiness=true. Elapsed: 16.068484962s
Dec 23 02:19:24.818: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Running", Reason="", readiness=true. Elapsed: 18.072821725s
Dec 23 02:19:26.822: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Running", Reason="", readiness=true. Elapsed: 20.07681651s
Dec 23 02:19:28.827: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Running", Reason="", readiness=true. Elapsed: 22.081279497s
Dec 23 02:19:30.830: INFO: Pod "pod-subpath-test-configmap-p5mv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.084989723s
STEP: Saw pod success
Dec 23 02:19:30.830: INFO: Pod "pod-subpath-test-configmap-p5mv" satisfied condition "success or failure"
Dec 23 02:19:30.834: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-p5mv container test-container-subpath-configmap-p5mv: 
STEP: delete the pod
Dec 23 02:19:30.873: INFO: Waiting for pod pod-subpath-test-configmap-p5mv to disappear
Dec 23 02:19:30.890: INFO: Pod pod-subpath-test-configmap-p5mv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-p5mv
Dec 23 02:19:30.890: INFO: Deleting pod "pod-subpath-test-configmap-p5mv" in namespace "subpath-1031"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:19:30.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1031" for this suite.

• [SLOW TEST:24.339 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":109,"skipped":1707,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:19:30.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:19:42.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-610" for this suite.

• [SLOW TEST:11.255 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":110,"skipped":1745,"failed":0}
SSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:19:42.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:19:58.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4723" for this suite.

• [SLOW TEST:16.112 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":111,"skipped":1749,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:19:58.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:20:14.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2550" for this suite.

• [SLOW TEST:16.207 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":112,"skipped":1781,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:20:14.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 23 02:20:22.206: INFO: 9 pods remaining
Dec 23 02:20:22.206: INFO: 0 pods has nil DeletionTimestamp
Dec 23 02:20:22.206: INFO: 
Dec 23 02:20:22.724: INFO: 0 pods remaining
Dec 23 02:20:22.724: INFO: 0 pods has nil DeletionTimestamp
Dec 23 02:20:22.724: INFO: 
STEP: Gathering metrics
W1223 02:20:23.894554       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 02:20:23.894: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:20:23.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7761" for this suite.

• [SLOW TEST:10.097 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":113,"skipped":1831,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:20:24.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:20:24.709: INFO: Creating deployment "webserver-deployment"
Dec 23 02:20:24.979: INFO: Waiting for observed generation 1
Dec 23 02:20:27.159: INFO: Waiting for all required pods to come up
Dec 23 02:20:27.163: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 23 02:20:39.174: INFO: Waiting for deployment "webserver-deployment" to complete
Dec 23 02:20:39.180: INFO: Updating deployment "webserver-deployment" with a non-existent image
Dec 23 02:20:39.185: INFO: Updating deployment webserver-deployment
Dec 23 02:20:39.185: INFO: Waiting for observed generation 2
Dec 23 02:20:41.459: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 23 02:20:41.461: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 23 02:20:41.520: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Dec 23 02:20:41.632: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 23 02:20:41.632: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 23 02:20:41.634: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Dec 23 02:20:41.638: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Dec 23 02:20:41.638: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Dec 23 02:20:41.642: INFO: Updating deployment webserver-deployment
Dec 23 02:20:41.642: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Dec 23 02:20:41.841: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 23 02:20:41.919: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Dec 23 02:20:42.415: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-8849 /apis/apps/v1/namespaces/deployment-8849/deployments/webserver-deployment f935d699-4104-44e9-8be6-d82703e6424f 23934756 3 2020-12-23 02:20:24 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0056a3a68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-12-23 02:20:39 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-12-23 02:20:41 +0000 UTC,LastTransitionTime:2020-12-23 02:20:41 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Dec 23 02:20:42.555: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-8849 /apis/apps/v1/namespaces/deployment-8849/replicasets/webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 23934800 3 2020-12-23 02:20:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment f935d699-4104-44e9-8be6-d82703e6424f 0xc005645af7 0xc005645af8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005645b68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 23 02:20:42.555: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Dec 23 02:20:42.555: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-8849 /apis/apps/v1/namespaces/deployment-8849/replicasets/webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 23934774 3 2020-12-23 02:20:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment f935d699-4104-44e9-8be6-d82703e6424f 0xc005645a27 0xc005645a28}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005645a98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Dec 23 02:20:42.621: INFO: Pod "webserver-deployment-595b5b9587-488gj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-488gj webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-488gj b4750b89-f3a4-4f88-86fd-7a13cf1a536a 23934813 0 2020-12-23 02:20:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d900a7 0xc002d900a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-12-23 02:20:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.621: INFO: Pod "webserver-deployment-595b5b9587-5hz8z" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5hz8z webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-5hz8z ef3b6fa5-7eaa-4787-9e25-f81c5e2fe071 23934589 0 2020-12-23 02:20:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90207 0xc002d90208}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.193,StartTime:2020-12-23 02:20:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:20:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ccebb31babe9f8aa3fc6f6876d6cf354e51687d0902c40a74088f036fb76dd9c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.193,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
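The framework prints each Pod object verbatim while it waits for the rollout to settle; "is available" above means the pod has a Ready=True condition in the PodCondition list of its dump. A minimal client-go sketch of the same check, outside the e2e framework, might look like the following. The kubeconfig path, namespace, and label selector are taken from this log; the context-taking List signature is the modern one (client-go v0.18+), slightly newer than the v1.17-era client that produced this output.

// podready.go: a minimal sketch (not the e2e framework's own code) that
// lists the Deployment's pods by pod-template-hash and reports readiness,
// the same information the dumps above encode in their PodCondition lists.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite loads at startup.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Namespace and selector as they appear in the dumps above.
	pods, err := clientset.CoreV1().Pods("deployment-8849").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "pod-template-hash=595b5b9587"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
				break
			}
		}
		fmt.Printf("%s phase=%s ready=%v\n", pod.Name, pod.Status.Phase, ready)
	}
}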
Dec 23 02:20:42.621: INFO: Pod "webserver-deployment-595b5b9587-5qstf" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5qstf webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-5qstf b42b9165-af52-4676-9d3a-51170d3e9478 23934581 0 2020-12-23 02:20:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90387 0xc002d90388}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.53,StartTime:2020-12-23 02:20:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:20:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2d61cc4cf490cdff54088aaad8c817383f33f4b6a0bdd98ed0eca1e3eed3364b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.53,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.622: INFO: Pod "webserver-deployment-595b5b9587-7mvlr" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7mvlr webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-7mvlr 9e9c44fe-7ae6-4a83-8d27-44e957551456 23934626 0 2020-12-23 02:20:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90507 0xc002d90508}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.56,StartTime:2020-12-23 02:20:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:20:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://859bcb9dd832662a7cdf81f415b6cf8a4ff013e8f0c96f58a218ac1462bc18ed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.622: INFO: Pod "webserver-deployment-595b5b9587-7q86s" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7q86s webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-7q86s 5a0b6672-25cc-4162-88da-8d9f9616ffe9 23934633 0 2020-12-23 02:20:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90697 0xc002d90698}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.195,StartTime:2020-12-23 02:20:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:20:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0d2ff37bde4a4142b1439e4513d1488e04acf2de3137c22e76a3a13d52b41d40,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.195,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
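A recurring detail in these dumps: the service-account token volume is printed with DefaultMode:*420. The API serializes the mode as a decimal int32, so 420 is simply 0644 in octal (rw-r--r--). A one-line check:

package main

import "fmt"

func main() {
	// The dumps print the volume's DefaultMode as decimal 420;
	// in octal that is the familiar 0644 file mode.
	fmt.Printf("%o\n", 420) // prints: 644
}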
Dec 23 02:20:42.622: INFO: Pod "webserver-deployment-595b5b9587-9t7n8" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-9t7n8 webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-9t7n8 6275a138-a563-4b68-a450-db5a42f90050 23934617 0 2020-12-23 02:20:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90817 0xc002d90818}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.194,StartTime:2020-12-23 02:20:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:20:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d8a01f48dbd4bc5b9c1e3b0d2239bf6205cfdb5239c8c14c9512fb8042ab38e8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.194,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.622: INFO: Pod "webserver-deployment-595b5b9587-dt46j" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dt46j webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-dt46j 54e2d6d4-a346-48cc-ae7d-37059e6eb8da 23934611 0 2020-12-23 02:20:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90997 0xc002d90998}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.54,StartTime:2020-12-23 02:20:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:20:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3cc05db07dd3268546dc9e1157a26d4d25b4e7daade691a459a4dd56ee2179ec,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.54,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
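Each ObjectMeta above carries a single owner reference, {apps/v1 ReplicaSet webserver-deployment-595b5b9587 ...}: the pods are controlled by the ReplicaSet the Deployment stamped out for this pod-template-hash. Given a decoded Pod, the controlling owner can be read with the apimachinery helper; a small sketch, assuming the corev1/metav1 imports from the earlier example:

// controllerName returns "Kind/name" of the pod's controlling owner,
// e.g. "ReplicaSet/webserver-deployment-595b5b9587" for the pods above.
// The hex values in each dump (0xc002d90997 0xc002d90998 and friends) are
// just the pointer addresses behind the owner reference's Controller and
// BlockOwnerDeletion *bool fields as rendered by Go's default formatter.
func controllerName(pod *corev1.Pod) string {
	if ref := metav1.GetControllerOf(pod); ref != nil {
		return ref.Kind + "/" + ref.Name
	}
	return "<none>"
}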
Dec 23 02:20:42.622: INFO: Pod "webserver-deployment-595b5b9587-fc8hp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fc8hp webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-fc8hp ef1b21e6-3e50-44a9-9a45-3abd9f2e1ad3 23934762 0 2020-12-23 02:20:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90b17 0xc002d90b18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
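fc8hp is the first pod logged as "not available": it was created at 02:20:41 while the deployment scales up, has only a PodScheduled=True condition, and is still Pending with empty HostIP/PodIP. "Available" is stricter than Running: the pod must be Ready, and must have stayed Ready for the deployment's minReadySeconds. A hand-rolled sketch of that rule follows (the authoritative helper lives in the Kubernetes pod utilities; this is a simplified restatement, not the controller's actual code):

package podutil

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// isAvailable mirrors, in simplified form, the rule the deployment
// controller applies: Ready=True, and Ready for at least minReadySeconds.
func isAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type != corev1.PodReady || cond.Status != corev1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		// LastTransitionTime records when Ready last flipped to True.
		return now.Sub(cond.LastTransitionTime.Time) >=
			time.Duration(minReadySeconds)*time.Second
	}
	return false // Pending pods like fc8hp have no Ready condition yet
}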
Dec 23 02:20:42.622: INFO: Pod "webserver-deployment-595b5b9587-h98n7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-h98n7 webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-h98n7 6dd936f1-4e02-4998-b8ed-d3197006b02e 23934618 0 2020-12-23 02:20:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90c37 0xc002d90c38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.55,StartTime:2020-12-23 02:20:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:20:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://42dbf35c525abb863bd0ab6f681b40af963edd41e9a68a6090d31bb89228893d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.623: INFO: Pod "webserver-deployment-595b5b9587-k2zcf" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-k2zcf webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-k2zcf 4676f577-ae92-4a8e-8e40-c160224a47f5 23934766 0 2020-12-23 02:20:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90db7 0xc002d90db8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.623: INFO: Pod "webserver-deployment-595b5b9587-lj8vb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-lj8vb webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-lj8vb 19e63903-9d2e-444e-afd1-4fe17a688d13 23934755 0 2020-12-23 02:20:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90ed7 0xc002d90ed8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:41 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.623: INFO: Pod "webserver-deployment-595b5b9587-mhctq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mhctq webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-mhctq 1ea9d8c8-0d15-4bac-b521-c2c9e4d18163 23934791 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d90ff7 0xc002d90ff8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.623: INFO: Pod "webserver-deployment-595b5b9587-ncbwp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ncbwp webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-ncbwp 59fa95e0-c554-4184-a929-7120dc0b2079 23934792 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d91117 0xc002d91118}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
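Every pod in this test, available or not, carries the same pair of NoExecute tolerations with TolerationSeconds:*300. Nothing in the test adds them; they are injected by the DefaultTolerationSeconds admission plugin so that pods are evicted five minutes after their node goes not-ready or unreachable. A small predicate to recognize them (hypothetical helper name, same corev1 import as above):

// hasDefaultTolerations reports whether the pod carries the two NoExecute
// tolerations (node.kubernetes.io/not-ready and .../unreachable) that the
// DefaultTolerationSeconds admission plugin injects, as every dump shows.
func hasDefaultTolerations(pod *corev1.Pod) bool {
	seen := map[string]bool{}
	for _, t := range pod.Spec.Tolerations {
		if t.Effect == corev1.TaintEffectNoExecute && t.Operator == corev1.TolerationOpExists {
			seen[t.Key] = true
		}
	}
	return seen["node.kubernetes.io/not-ready"] && seen["node.kubernetes.io/unreachable"]
}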
Dec 23 02:20:42.623: INFO: Pod "webserver-deployment-595b5b9587-ndztg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ndztg webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-ndztg c287a263-9a50-4f06-9fa0-59e291af7060 23934815 0 2020-12-23 02:20:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d91247 0xc002d91248}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-12-23 02:20:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
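ndztg shows the intermediate state between the bare Pending pods and the available ones: it is scheduled and initialized, the kubelet has reported a ContainerStatus, but the httpd container is still Waiting with Reason:ContainerCreating, so Ready and ContainersReady are False with Reason:ContainersNotReady. Pulling those waiting reasons out of a Pod takes only a few lines (hypothetical helper name, same corev1 import as above):

// waitingReasons maps each container that has not started to the Waiting
// reason the kubelet reported, e.g. {"httpd": "ContainerCreating"} for
// the ndztg dump above.
func waitingReasons(pod *corev1.Pod) map[string]string {
	reasons := map[string]string{}
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			reasons[cs.Name] = cs.State.Waiting.Reason
		}
	}
	return reasons
}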
Dec 23 02:20:42.623: INFO: Pod "webserver-deployment-595b5b9587-nnb84" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nnb84 webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-nnb84 6970f197-57a1-4581-a79c-eb9c308649df 23934789 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d913c7 0xc002d913c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.624: INFO: Pod "webserver-deployment-595b5b9587-ql8ff" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ql8ff webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-ql8ff c6d8cb47-ad04-462f-bb7a-b2c005a0cbec 23934550 0 2020-12-23 02:20:25 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d914e7 0xc002d914e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.52,StartTime:2020-12-23 02:20:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:20:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ebed815b8e463bddd82703e57aa753fe503515c76831b62a92db09b67ef6ead2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.52,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
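The "is available" / "is not available" verdicts in these dumps come down to the pod's Ready condition: with the minReadySeconds of 0 in effect here, a pod counts as available as soon as Ready is True (ql8ff above flipped to Ready at 02:20:29). The following is a minimal client-go sketch of that check, not the framework's own code; it assumes a current client-go (the v0.20+ List signature with a context argument), and the namespace deployment-8849 and the name=httpd label selector are taken from the dumps, while isPodReady is a local helper introduced for illustration.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// isPodReady mirrors the availability check behind the log lines above:
// with minReadySeconds=0, a pod is available once its Ready condition is
// True. This is an illustrative helper, not the e2e framework's code.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Namespace and label selector are taken from the dumps above.
	pods, err := clientset.CoreV1().Pods("deployment-8849").List(
		context.TODO(), metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		state := "not available"
		if isPodReady(&pod) {
			state = "available"
		}
		fmt.Printf("Pod %q is %s\n", pod.Name, state)
	}
}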
Dec 23 02:20:42.624: INFO: Pod "webserver-deployment-595b5b9587-s6wdb" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-s6wdb webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-s6wdb 2a3435cc-5437-49f2-80b7-4732cebb92db 23934793 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d91667 0xc002d91668}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.624: INFO: Pod "webserver-deployment-595b5b9587-t6gjp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-t6gjp webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-t6gjp d02d4996-d360-4acb-a4a5-a24110b419a0 23934794 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d917b7 0xc002d917b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.624: INFO: Pod "webserver-deployment-595b5b9587-zhjrw" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zhjrw webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-zhjrw 22e7bee7-6cf9-4266-acf2-d5cd61953d29 23934788 0 2020-12-23 02:20:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d918e7 0xc002d918e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-12-23 02:20:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
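Note the difference between this dump and the scheduled-only ones (s6wdb, t6gjp): zhjrw already reports a ContainerStatus stuck in Waiting with reason ContainerCreating, while the others still carry an empty ContainerStatuses slice. A small illustrative helper, assumed rather than taken from the suite, that surfaces that distinction:

package podinspect

import corev1 "k8s.io/api/core/v1"

// waitingReasons reports, per container, why a Pending pod is stuck in a
// Waiting state. For webserver-deployment-595b5b9587-zhjrw above it would
// return map[httpd:ContainerCreating]; for pods that are scheduled but
// report no container statuses yet, it returns an empty map.
func waitingReasons(pod *corev1.Pod) map[string]string {
	reasons := make(map[string]string)
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			reasons[cs.Name] = cs.State.Waiting.Reason
		}
	}
	return reasons
}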
Dec 23 02:20:42.624: INFO: Pod "webserver-deployment-595b5b9587-zrzvx" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zrzvx webserver-deployment-595b5b9587- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-595b5b9587-zrzvx a3044d48-6420-4304-9288-2d81185f1574 23934764 0 2020-12-23 02:20:41 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 85af2f0d-8d99-4516-a00e-c42af5019b88 0xc002d91a47 0xc002d91a48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.625: INFO: Pod "webserver-deployment-c7997dcc8-4lnfh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4lnfh webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-4lnfh a0bf71d4-cc67-42d2-986a-f527bfc92ce8 23934773 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002d91b67 0xc002d91b68}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
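From this dump onward the pods belong to the second ReplicaSet, c7997dcc8, whose template specifies Image:webserver:404, a tag that presumably cannot be resolved, which is consistent with every one of these pods staying "not available" in this run. The two generations are distinguishable by the pod-template-hash label the Deployment controller stamps on each ReplicaSet's pods. A hypothetical grouping helper, for illustration only:

package podinspect

import corev1 "k8s.io/api/core/v1"

// groupByTemplateHash buckets pods by the pod-template-hash label.
// Applied to the dumps above it yields two groups: 595b5b9587
// (httpd:2.4.38-alpine, the old template) and c7997dcc8 (webserver:404,
// the new template whose pods never become available here).
func groupByTemplateHash(pods []corev1.Pod) map[string][]corev1.Pod {
	groups := make(map[string][]corev1.Pod)
	for _, pod := range pods {
		hash := pod.Labels["pod-template-hash"]
		groups[hash] = append(groups[hash], pod)
	}
	return groups
}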
Dec 23 02:20:42.625: INFO: Pod "webserver-deployment-c7997dcc8-578f2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-578f2 webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-578f2 c4185aa4-ad79-48f4-a9f3-ca4aa5a8a8f4 23934701 0 2020-12-23 02:20:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002d91c97 0xc002d91c98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-12-23 02:20:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.625: INFO: Pod "webserver-deployment-c7997dcc8-8jwkf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8jwkf webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-8jwkf 5a95e3c9-add5-421f-9f87-451c508bca38 23934787 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002d91e17 0xc002d91e18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.625: INFO: Pod "webserver-deployment-c7997dcc8-9n9lf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9n9lf webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-9n9lf 468253cc-2c51-4dde-9c19-18183230040b 23934802 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002d91f47 0xc002d91f48}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.625: INFO: Pod "webserver-deployment-c7997dcc8-cptch" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cptch webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-cptch 0a938283-5e49-4957-a5ad-6e75ab7ca94a 23934798 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002c340b7 0xc002c340b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.626: INFO: Pod "webserver-deployment-c7997dcc8-d5df5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-d5df5 webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-d5df5 a49e79e2-2477-48e9-9da8-f67a8498d157 23934799 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002c34f27 0xc002c34f28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.626: INFO: Pod "webserver-deployment-c7997dcc8-gzxjp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gzxjp webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-gzxjp bf14c986-f120-4be3-9405-b1f4ffaff18b 23934759 0 2020-12-23 02:20:41 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002c35057 0xc002c35058}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.626: INFO: Pod "webserver-deployment-c7997dcc8-k49d8" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k49d8 webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-k49d8 ca5a3fbb-9064-4af2-979b-207d2c76950f 23934715 0 2020-12-23 02:20:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002c35187 0xc002c35188}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-12-23 02:20:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.626: INFO: Pod "webserver-deployment-c7997dcc8-lmp7s" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lmp7s webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-lmp7s ab9b0ef5-7b96-42b9-affd-859aeae724c3 23934796 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002c35307 0xc002c35308}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.627: INFO: Pod "webserver-deployment-c7997dcc8-lvksp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lvksp webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-lvksp 227ff1eb-23ca-4d9c-af3b-20e8036c58a6 23934731 0 2020-12-23 02:20:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002c35437 0xc002c35438}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-12-23 02:20:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.627: INFO: Pod "webserver-deployment-c7997dcc8-mp5k9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mp5k9 webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-mp5k9 67f50d60-0c03-423f-b6d9-2a3d566141ea 23934703 0 2020-12-23 02:20:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002c355b7 0xc002c355b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:,StartTime:2020-12-23 02:20:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.627: INFO: Pod "webserver-deployment-c7997dcc8-xjh57" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xjh57 webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-xjh57 33fe20d3-ac82-43ca-9c98-370b43a2b207 23934795 0 2020-12-23 02:20:42 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002c35737 0xc002c35738}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 23 02:20:42.627: INFO: Pod "webserver-deployment-c7997dcc8-xx55x" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xx55x webserver-deployment-c7997dcc8- deployment-8849 /api/v1/namespaces/deployment-8849/pods/webserver-deployment-c7997dcc8-xx55x c804b01f-6820-4d29-8e8d-f87b79a8949e 23934730 0 2020-12-23 02:20:39 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5daf98f6-7283-494b-9296-16f2ccfe2340 0xc002c35867 0xc002c35868}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jlhcj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jlhcj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jlhcj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:20:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-12-23 02:20:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:20:42.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8849" for this suite.

• [SLOW TEST:18.180 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":114,"skipped":1862,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:20:42.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 23 02:21:05.683: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:21:05.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9836" for this suite.

• [SLOW TEST:23.120 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1888,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:21:05.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 23 02:21:05.980: INFO: Waiting up to 5m0s for pod "pod-c50f825b-2f30-4108-8fef-7850cf9d71d8" in namespace "emptydir-1525" to be "success or failure"
Dec 23 02:21:05.994: INFO: Pod "pod-c50f825b-2f30-4108-8fef-7850cf9d71d8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.534302ms
Dec 23 02:21:07.998: INFO: Pod "pod-c50f825b-2f30-4108-8fef-7850cf9d71d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018053833s
Dec 23 02:21:10.003: INFO: Pod "pod-c50f825b-2f30-4108-8fef-7850cf9d71d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022364632s
STEP: Saw pod success
Dec 23 02:21:10.003: INFO: Pod "pod-c50f825b-2f30-4108-8fef-7850cf9d71d8" satisfied condition "success or failure"
Dec 23 02:21:10.006: INFO: Trying to get logs from node jerma-worker pod pod-c50f825b-2f30-4108-8fef-7850cf9d71d8 container test-container: 
STEP: delete the pod
Dec 23 02:21:10.050: INFO: Waiting for pod pod-c50f825b-2f30-4108-8fef-7850cf9d71d8 to disappear
Dec 23 02:21:10.055: INFO: Pod pod-c50f825b-2f30-4108-8fef-7850cf9d71d8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:21:10.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1525" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1905,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:21:10.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 23 02:21:10.505: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 23 02:21:12.518: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286870, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286870, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286870, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286870, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 02:21:15.604: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:21:16.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6259" for this suite.
STEP: Destroying namespace "webhook-6259-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.165 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":117,"skipped":1910,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:21:16.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:21:29.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2387" for this suite.

• [SLOW TEST:13.153 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":118,"skipped":1922,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:21:29.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Dec 23 02:21:33.969: INFO: Successfully updated pod "annotationupdate58b0d1f1-4bbf-4baa-b070-7068506f7816"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:21:35.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9925" for this suite.

• [SLOW TEST:6.624 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1946,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:21:36.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:21:40.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4601" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":120,"skipped":1953,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:21:40.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-161b7a67-da51-4aa4-ae3c-2b583c1c83a4
STEP: Creating a pod to test consume secrets
Dec 23 02:21:41.407: INFO: Waiting up to 5m0s for pod "pod-secrets-6800b21c-841d-465a-90f2-e8e37a4aad56" in namespace "secrets-1196" to be "success or failure"
Dec 23 02:21:41.413: INFO: Pod "pod-secrets-6800b21c-841d-465a-90f2-e8e37a4aad56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258319ms
Dec 23 02:21:43.651: INFO: Pod "pod-secrets-6800b21c-841d-465a-90f2-e8e37a4aad56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244379472s
Dec 23 02:21:45.675: INFO: Pod "pod-secrets-6800b21c-841d-465a-90f2-e8e37a4aad56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268116874s
Dec 23 02:21:47.683: INFO: Pod "pod-secrets-6800b21c-841d-465a-90f2-e8e37a4aad56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.275642744s
STEP: Saw pod success
Dec 23 02:21:47.683: INFO: Pod "pod-secrets-6800b21c-841d-465a-90f2-e8e37a4aad56" satisfied condition "success or failure"
Dec 23 02:21:47.685: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-6800b21c-841d-465a-90f2-e8e37a4aad56 container secret-volume-test: 
STEP: delete the pod
Dec 23 02:21:47.720: INFO: Waiting for pod pod-secrets-6800b21c-841d-465a-90f2-e8e37a4aad56 to disappear
Dec 23 02:21:47.724: INFO: Pod pod-secrets-6800b21c-841d-465a-90f2-e8e37a4aad56 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:21:47.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1196" for this suite.

• [SLOW TEST:7.420 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1965,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:21:47.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 23 02:21:48.360: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 23 02:21:50.371: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286908, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286908, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286908, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744286908, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 02:21:53.411: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:21:53.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2205" for this suite.
STEP: Destroying namespace "webhook-2205-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.965 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":122,"skipped":1965,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:21:53.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:21:57.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7974" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":1966,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:21:57.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Dec 23 02:21:57.869: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 02:22:00.811: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:22:11.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5037" for this suite.

• [SLOW TEST:13.707 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":124,"skipped":1973,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:22:11.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 23 02:22:11.638: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c045133-9c3a-4d65-9d71-5a2d26f73966" in namespace "projected-9328" to be "success or failure"
Dec 23 02:22:11.688: INFO: Pod "downwardapi-volume-3c045133-9c3a-4d65-9d71-5a2d26f73966": Phase="Pending", Reason="", readiness=false. Elapsed: 49.808239ms
Dec 23 02:22:13.690: INFO: Pod "downwardapi-volume-3c045133-9c3a-4d65-9d71-5a2d26f73966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052506438s
Dec 23 02:22:15.694: INFO: Pod "downwardapi-volume-3c045133-9c3a-4d65-9d71-5a2d26f73966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056200935s
STEP: Saw pod success
Dec 23 02:22:15.694: INFO: Pod "downwardapi-volume-3c045133-9c3a-4d65-9d71-5a2d26f73966" satisfied condition "success or failure"
Dec 23 02:22:15.697: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3c045133-9c3a-4d65-9d71-5a2d26f73966 container client-container: 
STEP: delete the pod
Dec 23 02:22:15.762: INFO: Waiting for pod downwardapi-volume-3c045133-9c3a-4d65-9d71-5a2d26f73966 to disappear
Dec 23 02:22:15.773: INFO: Pod downwardapi-volume-3c045133-9c3a-4d65-9d71-5a2d26f73966 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:22:15.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9328" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2024,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:22:15.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-5e83db7d-19fe-49df-bf37-da605040561b
STEP: Creating a pod to test consume secrets
Dec 23 02:22:15.847: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ea2c69ca-4707-4659-b38b-15c0f81a8222" in namespace "projected-2506" to be "success or failure"
Dec 23 02:22:15.897: INFO: Pod "pod-projected-secrets-ea2c69ca-4707-4659-b38b-15c0f81a8222": Phase="Pending", Reason="", readiness=false. Elapsed: 49.650112ms
Dec 23 02:22:17.901: INFO: Pod "pod-projected-secrets-ea2c69ca-4707-4659-b38b-15c0f81a8222": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053908717s
Dec 23 02:22:19.910: INFO: Pod "pod-projected-secrets-ea2c69ca-4707-4659-b38b-15c0f81a8222": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062430351s
STEP: Saw pod success
Dec 23 02:22:19.910: INFO: Pod "pod-projected-secrets-ea2c69ca-4707-4659-b38b-15c0f81a8222" satisfied condition "success or failure"
Dec 23 02:22:19.912: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ea2c69ca-4707-4659-b38b-15c0f81a8222 container projected-secret-volume-test: 
STEP: delete the pod
Dec 23 02:22:19.927: INFO: Waiting for pod pod-projected-secrets-ea2c69ca-4707-4659-b38b-15c0f81a8222 to disappear
Dec 23 02:22:19.962: INFO: Pod pod-projected-secrets-ea2c69ca-4707-4659-b38b-15c0f81a8222 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:22:19.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2506" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2027,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:22:19.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-bb1b3e78-e3a1-42fd-986e-76451bf89e85
STEP: Creating a pod to test consume configMaps
Dec 23 02:22:20.109: INFO: Waiting up to 5m0s for pod "pod-configmaps-616d5087-a642-4246-b9a9-f39519cbe282" in namespace "configmap-354" to be "success or failure"
Dec 23 02:22:20.166: INFO: Pod "pod-configmaps-616d5087-a642-4246-b9a9-f39519cbe282": Phase="Pending", Reason="", readiness=false. Elapsed: 56.843677ms
Dec 23 02:22:22.178: INFO: Pod "pod-configmaps-616d5087-a642-4246-b9a9-f39519cbe282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069634654s
Dec 23 02:22:24.181: INFO: Pod "pod-configmaps-616d5087-a642-4246-b9a9-f39519cbe282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072592671s
STEP: Saw pod success
Dec 23 02:22:24.181: INFO: Pod "pod-configmaps-616d5087-a642-4246-b9a9-f39519cbe282" satisfied condition "success or failure"
Dec 23 02:22:24.184: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-616d5087-a642-4246-b9a9-f39519cbe282 container configmap-volume-test: 
STEP: delete the pod
Dec 23 02:22:24.269: INFO: Waiting for pod pod-configmaps-616d5087-a642-4246-b9a9-f39519cbe282 to disappear
Dec 23 02:22:24.272: INFO: Pod pod-configmaps-616d5087-a642-4246-b9a9-f39519cbe282 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:22:24.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-354" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2054,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:22:24.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1223 02:22:34.506096       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 23 02:22:34.506: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:22:34.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4403" for this suite.

• [SLOW TEST:10.213 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":128,"skipped":2059,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:22:34.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-113
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 23 02:22:34.611: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 23 02:23:04.696: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.75 8081 | grep -v '^\s*$'] Namespace:pod-network-test-113 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:23:04.696: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:23:04.726624       6 log.go:172] (0xc005334790) (0xc001c05900) Create stream
I1223 02:23:04.726656       6 log.go:172] (0xc005334790) (0xc001c05900) Stream added, broadcasting: 1
I1223 02:23:04.728334       6 log.go:172] (0xc005334790) Reply frame received for 1
I1223 02:23:04.728373       6 log.go:172] (0xc005334790) (0xc0023c5cc0) Create stream
I1223 02:23:04.728387       6 log.go:172] (0xc005334790) (0xc0023c5cc0) Stream added, broadcasting: 3
I1223 02:23:04.729357       6 log.go:172] (0xc005334790) Reply frame received for 3
I1223 02:23:04.729388       6 log.go:172] (0xc005334790) (0xc002970f00) Create stream
I1223 02:23:04.729397       6 log.go:172] (0xc005334790) (0xc002970f00) Stream added, broadcasting: 5
I1223 02:23:04.730331       6 log.go:172] (0xc005334790) Reply frame received for 5
I1223 02:23:05.815619       6 log.go:172] (0xc005334790) Data frame received for 3
I1223 02:23:05.815665       6 log.go:172] (0xc0023c5cc0) (3) Data frame handling
I1223 02:23:05.815685       6 log.go:172] (0xc0023c5cc0) (3) Data frame sent
I1223 02:23:05.815704       6 log.go:172] (0xc005334790) Data frame received for 3
I1223 02:23:05.815721       6 log.go:172] (0xc0023c5cc0) (3) Data frame handling
I1223 02:23:05.815739       6 log.go:172] (0xc005334790) Data frame received for 5
I1223 02:23:05.815754       6 log.go:172] (0xc002970f00) (5) Data frame handling
I1223 02:23:05.819167       6 log.go:172] (0xc005334790) Data frame received for 1
I1223 02:23:05.819195       6 log.go:172] (0xc001c05900) (1) Data frame handling
I1223 02:23:05.819211       6 log.go:172] (0xc001c05900) (1) Data frame sent
I1223 02:23:05.819229       6 log.go:172] (0xc005334790) (0xc001c05900) Stream removed, broadcasting: 1
I1223 02:23:05.819253       6 log.go:172] (0xc005334790) Go away received
I1223 02:23:05.819486       6 log.go:172] (0xc005334790) (0xc001c05900) Stream removed, broadcasting: 1
I1223 02:23:05.819502       6 log.go:172] (0xc005334790) (0xc0023c5cc0) Stream removed, broadcasting: 3
I1223 02:23:05.819517       6 log.go:172] (0xc005334790) (0xc002970f00) Stream removed, broadcasting: 5
Dec 23 02:23:05.819: INFO: Found all expected endpoints: [netserver-0]
Dec 23 02:23:05.822: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.219 8081 | grep -v '^\s*$'] Namespace:pod-network-test-113 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:23:05.822: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:23:05.855204       6 log.go:172] (0xc004ae53f0) (0xc0028ea140) Create stream
I1223 02:23:05.855232       6 log.go:172] (0xc004ae53f0) (0xc0028ea140) Stream added, broadcasting: 1
I1223 02:23:05.857802       6 log.go:172] (0xc004ae53f0) Reply frame received for 1
I1223 02:23:05.857838       6 log.go:172] (0xc004ae53f0) (0xc002970fa0) Create stream
I1223 02:23:05.857848       6 log.go:172] (0xc004ae53f0) (0xc002970fa0) Stream added, broadcasting: 3
I1223 02:23:05.858890       6 log.go:172] (0xc004ae53f0) Reply frame received for 3
I1223 02:23:05.858927       6 log.go:172] (0xc004ae53f0) (0xc002971040) Create stream
I1223 02:23:05.858941       6 log.go:172] (0xc004ae53f0) (0xc002971040) Stream added, broadcasting: 5
I1223 02:23:05.859896       6 log.go:172] (0xc004ae53f0) Reply frame received for 5
I1223 02:23:06.958324       6 log.go:172] (0xc004ae53f0) Data frame received for 3
I1223 02:23:06.958427       6 log.go:172] (0xc002970fa0) (3) Data frame handling
I1223 02:23:06.958447       6 log.go:172] (0xc002970fa0) (3) Data frame sent
I1223 02:23:06.958459       6 log.go:172] (0xc004ae53f0) Data frame received for 3
I1223 02:23:06.958479       6 log.go:172] (0xc002970fa0) (3) Data frame handling
I1223 02:23:06.958509       6 log.go:172] (0xc004ae53f0) Data frame received for 5
I1223 02:23:06.958531       6 log.go:172] (0xc002971040) (5) Data frame handling
I1223 02:23:06.959816       6 log.go:172] (0xc004ae53f0) Data frame received for 1
I1223 02:23:06.959839       6 log.go:172] (0xc0028ea140) (1) Data frame handling
I1223 02:23:06.959848       6 log.go:172] (0xc0028ea140) (1) Data frame sent
I1223 02:23:06.959875       6 log.go:172] (0xc004ae53f0) (0xc0028ea140) Stream removed, broadcasting: 1
I1223 02:23:06.959901       6 log.go:172] (0xc004ae53f0) Go away received
I1223 02:23:06.960060       6 log.go:172] (0xc004ae53f0) (0xc0028ea140) Stream removed, broadcasting: 1
I1223 02:23:06.960084       6 log.go:172] (0xc004ae53f0) (0xc002970fa0) Stream removed, broadcasting: 3
I1223 02:23:06.960110       6 log.go:172] (0xc004ae53f0) (0xc002971040) Stream removed, broadcasting: 5
Dec 23 02:23:06.960: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:23:06.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-113" for this suite.

• [SLOW TEST:32.449 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2065,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:23:06.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 23 02:23:07.093: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:07.131: INFO: Number of nodes with available pods: 0
Dec 23 02:23:07.131: INFO: Node jerma-worker is running more than one daemon pod
Dec 23 02:23:08.135: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:08.137: INFO: Number of nodes with available pods: 0
Dec 23 02:23:08.137: INFO: Node jerma-worker is running more than one daemon pod
Dec 23 02:23:09.156: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:09.159: INFO: Number of nodes with available pods: 0
Dec 23 02:23:09.159: INFO: Node jerma-worker is running more than one daemon pod
Dec 23 02:23:10.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:10.170: INFO: Number of nodes with available pods: 0
Dec 23 02:23:10.170: INFO: Node jerma-worker is running more than one daemon pod
Dec 23 02:23:11.135: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:11.139: INFO: Number of nodes with available pods: 2
Dec 23 02:23:11.139: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 23 02:23:11.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:11.171: INFO: Number of nodes with available pods: 1
Dec 23 02:23:11.171: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:12.180: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:12.183: INFO: Number of nodes with available pods: 1
Dec 23 02:23:12.183: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:13.307: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:13.310: INFO: Number of nodes with available pods: 1
Dec 23 02:23:13.310: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:14.176: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:14.179: INFO: Number of nodes with available pods: 1
Dec 23 02:23:14.179: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:15.176: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:15.179: INFO: Number of nodes with available pods: 1
Dec 23 02:23:15.179: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:16.176: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:16.179: INFO: Number of nodes with available pods: 1
Dec 23 02:23:16.179: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:17.177: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:17.182: INFO: Number of nodes with available pods: 1
Dec 23 02:23:17.182: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:18.175: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:18.177: INFO: Number of nodes with available pods: 1
Dec 23 02:23:18.177: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:19.175: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:19.178: INFO: Number of nodes with available pods: 1
Dec 23 02:23:19.178: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:20.177: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:20.180: INFO: Number of nodes with available pods: 1
Dec 23 02:23:20.180: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:21.177: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:21.181: INFO: Number of nodes with available pods: 1
Dec 23 02:23:21.181: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:22.176: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:22.180: INFO: Number of nodes with available pods: 1
Dec 23 02:23:22.180: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:23.176: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:23.180: INFO: Number of nodes with available pods: 1
Dec 23 02:23:23.180: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:24.176: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:24.180: INFO: Number of nodes with available pods: 1
Dec 23 02:23:24.180: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:25.175: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:25.178: INFO: Number of nodes with available pods: 1
Dec 23 02:23:25.178: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:26.205: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:26.208: INFO: Number of nodes with available pods: 1
Dec 23 02:23:26.208: INFO: Node jerma-worker2 is running more than one daemon pod
Dec 23 02:23:27.176: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Dec 23 02:23:27.180: INFO: Number of nodes with available pods: 2
Dec 23 02:23:27.180: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6051, will wait for the garbage collector to delete the pods
Dec 23 02:23:27.241: INFO: Deleting DaemonSet.extensions daemon-set took: 6.226224ms
Dec 23 02:23:27.641: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.265039ms
Dec 23 02:23:34.344: INFO: Number of nodes with available pods: 0
Dec 23 02:23:34.344: INFO: Number of running nodes: 0, number of available pods: 0
Dec 23 02:23:34.346: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6051/daemonsets","resourceVersion":"23936127"},"items":null}

Dec 23 02:23:34.349: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6051/pods","resourceVersion":"23936127"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:23:34.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6051" for this suite.

• [SLOW TEST:27.398 seconds]
[sig-apps] Daemon set [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":130,"skipped":2073,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:23:34.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-8b8e0f93-dfad-4011-83b7-b4d35fe79417
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-8b8e0f93-dfad-4011-83b7-b4d35fe79417
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:23:40.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6336" for this suite.

• [SLOW TEST:6.278 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2076,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:23:40.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-a1fc7376-95cf-445d-b7c0-150e6284f7e6
STEP: Creating a pod to test consume configMaps
Dec 23 02:23:40.742: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-270eb9ff-6e0d-48b3-b361-d41a8a68e33a" in namespace "projected-137" to be "success or failure"
Dec 23 02:23:40.745: INFO: Pod "pod-projected-configmaps-270eb9ff-6e0d-48b3-b361-d41a8a68e33a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.443682ms
Dec 23 02:23:42.838: INFO: Pod "pod-projected-configmaps-270eb9ff-6e0d-48b3-b361-d41a8a68e33a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0959937s
Dec 23 02:23:44.842: INFO: Pod "pod-projected-configmaps-270eb9ff-6e0d-48b3-b361-d41a8a68e33a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10002307s
STEP: Saw pod success
Dec 23 02:23:44.843: INFO: Pod "pod-projected-configmaps-270eb9ff-6e0d-48b3-b361-d41a8a68e33a" satisfied condition "success or failure"
Dec 23 02:23:44.845: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-270eb9ff-6e0d-48b3-b361-d41a8a68e33a container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 02:23:44.872: INFO: Waiting for pod pod-projected-configmaps-270eb9ff-6e0d-48b3-b361-d41a8a68e33a to disappear
Dec 23 02:23:44.884: INFO: Pod pod-projected-configmaps-270eb9ff-6e0d-48b3-b361-d41a8a68e33a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:23:44.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-137" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2096,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:23:44.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the initial replication controller
Dec 23 02:23:44.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8478'
Dec 23 02:23:49.630: INFO: stderr: ""
Dec 23 02:23:49.630: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 02:23:49.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8478'
Dec 23 02:23:49.739: INFO: stderr: ""
Dec 23 02:23:49.739: INFO: stdout: "update-demo-nautilus-lpvwz update-demo-nautilus-vk7z5 "
Dec 23 02:23:49.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpvwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8478'
Dec 23 02:23:49.832: INFO: stderr: ""
Dec 23 02:23:49.832: INFO: stdout: ""
Dec 23 02:23:49.832: INFO: update-demo-nautilus-lpvwz is created but not running
Dec 23 02:23:54.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8478'
Dec 23 02:23:54.946: INFO: stderr: ""
Dec 23 02:23:54.946: INFO: stdout: "update-demo-nautilus-lpvwz update-demo-nautilus-vk7z5 "
Dec 23 02:23:54.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpvwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8478'
Dec 23 02:23:55.036: INFO: stderr: ""
Dec 23 02:23:55.036: INFO: stdout: "true"
Dec 23 02:23:55.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lpvwz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8478'
Dec 23 02:23:55.135: INFO: stderr: ""
Dec 23 02:23:55.135: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 02:23:55.135: INFO: validating pod update-demo-nautilus-lpvwz
Dec 23 02:23:55.139: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 02:23:55.139: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 23 02:23:55.139: INFO: update-demo-nautilus-lpvwz is verified up and running
Dec 23 02:23:55.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vk7z5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8478'
Dec 23 02:23:55.226: INFO: stderr: ""
Dec 23 02:23:55.226: INFO: stdout: "true"
Dec 23 02:23:55.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vk7z5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8478'
Dec 23 02:23:55.320: INFO: stderr: ""
Dec 23 02:23:55.320: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 02:23:55.320: INFO: validating pod update-demo-nautilus-vk7z5
Dec 23 02:23:55.324: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 02:23:55.324: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 23 02:23:55.324: INFO: update-demo-nautilus-vk7z5 is verified up and running
STEP: rolling-update to new replication controller
Dec 23 02:23:55.326: INFO: scanned /root for discovery docs: 
Dec 23 02:23:55.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8478'
Dec 23 02:24:17.829: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 23 02:24:17.829: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 02:24:17.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8478'
Dec 23 02:24:17.921: INFO: stderr: ""
Dec 23 02:24:17.921: INFO: stdout: "update-demo-kitten-bctch update-demo-kitten-bgbzw "
Dec 23 02:24:17.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bctch -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8478'
Dec 23 02:24:18.027: INFO: stderr: ""
Dec 23 02:24:18.027: INFO: stdout: "true"
Dec 23 02:24:18.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bctch -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8478'
Dec 23 02:24:18.123: INFO: stderr: ""
Dec 23 02:24:18.123: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 23 02:24:18.123: INFO: validating pod update-demo-kitten-bctch
Dec 23 02:24:18.127: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 23 02:24:18.127: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 23 02:24:18.127: INFO: update-demo-kitten-bctch is verified up and running
Dec 23 02:24:18.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bgbzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8478'
Dec 23 02:24:18.242: INFO: stderr: ""
Dec 23 02:24:18.242: INFO: stdout: "true"
Dec 23 02:24:18.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-bgbzw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8478'
Dec 23 02:24:18.332: INFO: stderr: ""
Dec 23 02:24:18.332: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 23 02:24:18.332: INFO: validating pod update-demo-kitten-bgbzw
Dec 23 02:24:18.335: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 23 02:24:18.336: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 23 02:24:18.336: INFO: update-demo-kitten-bgbzw is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:24:18.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8478" for this suite.

• [SLOW TEST:33.450 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should do a rolling update of a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":278,"completed":133,"skipped":2102,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:24:18.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 23 02:24:18.439: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-a 2e68e004-aed1-4b36-86a2-92abc9b4504e 23936423 0 2020-12-23 02:24:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 02:24:18.439: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-a 2e68e004-aed1-4b36-86a2-92abc9b4504e 23936423 0 2020-12-23 02:24:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 23 02:24:28.445: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-a 2e68e004-aed1-4b36-86a2-92abc9b4504e 23936487 0 2020-12-23 02:24:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 23 02:24:28.445: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-a 2e68e004-aed1-4b36-86a2-92abc9b4504e 23936487 0 2020-12-23 02:24:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 23 02:24:38.453: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-a 2e68e004-aed1-4b36-86a2-92abc9b4504e 23936522 0 2020-12-23 02:24:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 02:24:38.453: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-a 2e68e004-aed1-4b36-86a2-92abc9b4504e 23936522 0 2020-12-23 02:24:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 23 02:24:48.460: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-a 2e68e004-aed1-4b36-86a2-92abc9b4504e 23936552 0 2020-12-23 02:24:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 02:24:48.460: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-a 2e68e004-aed1-4b36-86a2-92abc9b4504e 23936552 0 2020-12-23 02:24:18 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 23 02:24:58.468: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-b bdcde4bf-5ae7-4db7-8f78-2830e7f95cb3 23936582 0 2020-12-23 02:24:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 02:24:58.468: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-b bdcde4bf-5ae7-4db7-8f78-2830e7f95cb3 23936582 0 2020-12-23 02:24:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 23 02:25:08.475: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-b bdcde4bf-5ae7-4db7-8f78-2830e7f95cb3 23936612 0 2020-12-23 02:24:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 02:25:08.475: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5249 /api/v1/namespaces/watch-5249/configmaps/e2e-watch-test-configmap-b bdcde4bf-5ae7-4db7-8f78-2830e7f95cb3 23936612 0 2020-12-23 02:24:58 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:25:18.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5249" for this suite.

• [SLOW TEST:60.148 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":134,"skipped":2109,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:25:18.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 23 02:25:18.583: INFO: Waiting up to 5m0s for pod "pod-2b567c52-b9d0-454d-a8eb-eaa66e31a8fd" in namespace "emptydir-5930" to be "success or failure"
Dec 23 02:25:18.605: INFO: Pod "pod-2b567c52-b9d0-454d-a8eb-eaa66e31a8fd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.801631ms
Dec 23 02:25:20.649: INFO: Pod "pod-2b567c52-b9d0-454d-a8eb-eaa66e31a8fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065765723s
Dec 23 02:25:22.652: INFO: Pod "pod-2b567c52-b9d0-454d-a8eb-eaa66e31a8fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069408598s
STEP: Saw pod success
Dec 23 02:25:22.652: INFO: Pod "pod-2b567c52-b9d0-454d-a8eb-eaa66e31a8fd" satisfied condition "success or failure"
Dec 23 02:25:22.655: INFO: Trying to get logs from node jerma-worker pod pod-2b567c52-b9d0-454d-a8eb-eaa66e31a8fd container test-container: 
STEP: delete the pod
Dec 23 02:25:22.685: INFO: Waiting for pod pod-2b567c52-b9d0-454d-a8eb-eaa66e31a8fd to disappear
Dec 23 02:25:22.690: INFO: Pod pod-2b567c52-b9d0-454d-a8eb-eaa66e31a8fd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:25:22.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5930" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2109,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:25:22.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 23 02:25:22.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d979acf4-dd2c-4d23-b2e6-6353974b0563" in namespace "projected-9086" to be "success or failure"
Dec 23 02:25:22.786: INFO: Pod "downwardapi-volume-d979acf4-dd2c-4d23-b2e6-6353974b0563": Phase="Pending", Reason="", readiness=false. Elapsed: 3.896151ms
Dec 23 02:25:24.791: INFO: Pod "downwardapi-volume-d979acf4-dd2c-4d23-b2e6-6353974b0563": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008390026s
Dec 23 02:25:26.794: INFO: Pod "downwardapi-volume-d979acf4-dd2c-4d23-b2e6-6353974b0563": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012252348s
STEP: Saw pod success
Dec 23 02:25:26.794: INFO: Pod "downwardapi-volume-d979acf4-dd2c-4d23-b2e6-6353974b0563" satisfied condition "success or failure"
Dec 23 02:25:26.797: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d979acf4-dd2c-4d23-b2e6-6353974b0563 container client-container: 
STEP: delete the pod
Dec 23 02:25:26.810: INFO: Waiting for pod downwardapi-volume-d979acf4-dd2c-4d23-b2e6-6353974b0563 to disappear
Dec 23 02:25:26.827: INFO: Pod downwardapi-volume-d979acf4-dd2c-4d23-b2e6-6353974b0563 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:25:26.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9086" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2123,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:25:26.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:25:26.914: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-875899c9-1085-44bc-b505-a78425fdca75" in namespace "security-context-test-7045" to be "success or failure"
Dec 23 02:25:26.922: INFO: Pod "busybox-privileged-false-875899c9-1085-44bc-b505-a78425fdca75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.68412ms
Dec 23 02:25:28.954: INFO: Pod "busybox-privileged-false-875899c9-1085-44bc-b505-a78425fdca75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040296674s
Dec 23 02:25:30.961: INFO: Pod "busybox-privileged-false-875899c9-1085-44bc-b505-a78425fdca75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047418007s
Dec 23 02:25:30.961: INFO: Pod "busybox-privileged-false-875899c9-1085-44bc-b505-a78425fdca75" satisfied condition "success or failure"
Dec 23 02:25:30.968: INFO: Got logs for pod "busybox-privileged-false-875899c9-1085-44bc-b505-a78425fdca75": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:25:30.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7045" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2134,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:25:30.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:25:31.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Dec 23 02:25:33.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9753 create -f -'
Dec 23 02:25:37.293: INFO: stderr: ""
Dec 23 02:25:37.293: INFO: stdout: "e2e-test-crd-publish-openapi-7238-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Dec 23 02:25:37.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9753 delete e2e-test-crd-publish-openapi-7238-crds test-cr'
Dec 23 02:25:37.403: INFO: stderr: ""
Dec 23 02:25:37.403: INFO: stdout: "e2e-test-crd-publish-openapi-7238-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Dec 23 02:25:37.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9753 apply -f -'
Dec 23 02:25:37.646: INFO: stderr: ""
Dec 23 02:25:37.646: INFO: stdout: "e2e-test-crd-publish-openapi-7238-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Dec 23 02:25:37.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9753 delete e2e-test-crd-publish-openapi-7238-crds test-cr'
Dec 23 02:25:37.747: INFO: stderr: ""
Dec 23 02:25:37.747: INFO: stdout: "e2e-test-crd-publish-openapi-7238-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Dec 23 02:25:37.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7238-crds'
Dec 23 02:25:37.986: INFO: stderr: ""
Dec 23 02:25:37.986: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7238-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:25:39.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9753" for this suite.

• [SLOW TEST:8.956 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":138,"skipped":2162,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:25:39.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 23 02:25:40.529: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 23 02:25:42.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287140, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287140, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287140, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287140, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 02:25:45.787: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Dec 23 02:25:45.808: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:25:45.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4905" for this suite.
STEP: Destroying namespace "webhook-4905-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.003 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":139,"skipped":2165,"failed":0}
SSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:25:45.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:25:46.070: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/: 
alternatives.log
containers/

[... the remaining 19 proxy responses, each returning the same log-directory listing (alternatives.log, containers/), plus the tail of this test (its [AfterEach] teardown and PASSED record, completed:140) were truncated in this capture ...]
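
The proxy subresource serves a node's /var/log directory through the API server, which is where the repeated alternatives.log / containers/ listing above comes from. The same endpoints can be queried directly (node name taken from the log; file name illustrative):

# Directory listing via the node proxy subresource.
kubectl get --raw "/api/v1/nodes/jerma-worker2/proxy/logs/"

# Fetch an individual log file through the same proxy.
kubectl get --raw "/api/v1/nodes/jerma-worker2/proxy/logs/alternatives.log"
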
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 23 02:25:46.441: INFO: Waiting up to 5m0s for pod "pod-c813ca87-5cdd-4571-bef1-6cfcf3e9e5bb" in namespace "emptydir-3507" to be "success or failure"
Dec 23 02:25:46.460: INFO: Pod "pod-c813ca87-5cdd-4571-bef1-6cfcf3e9e5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 19.600419ms
Dec 23 02:25:48.464: INFO: Pod "pod-c813ca87-5cdd-4571-bef1-6cfcf3e9e5bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023462235s
Dec 23 02:25:50.468: INFO: Pod "pod-c813ca87-5cdd-4571-bef1-6cfcf3e9e5bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027517293s
STEP: Saw pod success
Dec 23 02:25:50.468: INFO: Pod "pod-c813ca87-5cdd-4571-bef1-6cfcf3e9e5bb" satisfied condition "success or failure"
Dec 23 02:25:50.471: INFO: Trying to get logs from node jerma-worker2 pod pod-c813ca87-5cdd-4571-bef1-6cfcf3e9e5bb container test-container: 
STEP: delete the pod
Dec 23 02:25:50.519: INFO: Waiting for pod pod-c813ca87-5cdd-4571-bef1-6cfcf3e9e5bb to disappear
Dec 23 02:25:50.591: INFO: Pod pod-c813ca87-5cdd-4571-bef1-6cfcf3e9e5bb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:25:50.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3507" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2186,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:25:50.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:25:50.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:25:54.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6292" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2203,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:25:54.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 23 02:25:54.843: INFO: Waiting up to 5m0s for pod "pod-fcd07ce1-aa5a-47af-8bf0-e6de6546f31a" in namespace "emptydir-1776" to be "success or failure"
Dec 23 02:25:54.848: INFO: Pod "pod-fcd07ce1-aa5a-47af-8bf0-e6de6546f31a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.593767ms
Dec 23 02:25:56.852: INFO: Pod "pod-fcd07ce1-aa5a-47af-8bf0-e6de6546f31a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009357033s
Dec 23 02:25:58.856: INFO: Pod "pod-fcd07ce1-aa5a-47af-8bf0-e6de6546f31a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013580207s
STEP: Saw pod success
Dec 23 02:25:58.856: INFO: Pod "pod-fcd07ce1-aa5a-47af-8bf0-e6de6546f31a" satisfied condition "success or failure"
Dec 23 02:25:58.859: INFO: Trying to get logs from node jerma-worker pod pod-fcd07ce1-aa5a-47af-8bf0-e6de6546f31a container test-container: 
STEP: delete the pod
Dec 23 02:25:58.875: INFO: Waiting for pod pod-fcd07ce1-aa5a-47af-8bf0-e6de6546f31a to disappear
Dec 23 02:25:58.879: INFO: Pod pod-fcd07ce1-aa5a-47af-8bf0-e6de6546f31a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:25:58.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1776" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2206,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:25:58.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:25:58.971: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:25:59.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2117" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":144,"skipped":2219,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:25:59.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Dec 23 02:26:04.197: INFO: Successfully updated pod "labelsupdate3f36515a-13ca-4e88-97f6-5bf479f43e47"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:26:06.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3687" for this suite.

• [SLOW TEST:6.663 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2220,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:26:06.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 23 02:26:10.831: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2496 pod-service-account-8e17fc2e-28b2-41fb-b780-63eb93d92e7f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 23 02:26:11.088: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2496 pod-service-account-8e17fc2e-28b2-41fb-b780-63eb93d92e7f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 23 02:26:11.288: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2496 pod-service-account-8e17fc2e-28b2-41fb-b780-63eb93d92e7f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:26:11.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2496" for this suite.

• [SLOW TEST:5.275 seconds]
[sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":146,"skipped":2237,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:26:11.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-d65d1579-b965-4961-86be-f189bbd64f68
STEP: Creating a pod to test consume configMaps
Dec 23 02:26:12.121: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0c5ed219-3a8a-4c6a-80c4-9f17d1088443" in namespace "projected-2669" to be "success or failure"
Dec 23 02:26:12.236: INFO: Pod "pod-projected-configmaps-0c5ed219-3a8a-4c6a-80c4-9f17d1088443": Phase="Pending", Reason="", readiness=false. Elapsed: 115.318973ms
Dec 23 02:26:14.740: INFO: Pod "pod-projected-configmaps-0c5ed219-3a8a-4c6a-80c4-9f17d1088443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.619620725s
Dec 23 02:26:16.790: INFO: Pod "pod-projected-configmaps-0c5ed219-3a8a-4c6a-80c4-9f17d1088443": Phase="Pending", Reason="", readiness=false. Elapsed: 4.669329533s
Dec 23 02:26:18.794: INFO: Pod "pod-projected-configmaps-0c5ed219-3a8a-4c6a-80c4-9f17d1088443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.67303911s
STEP: Saw pod success
Dec 23 02:26:18.794: INFO: Pod "pod-projected-configmaps-0c5ed219-3a8a-4c6a-80c4-9f17d1088443" satisfied condition "success or failure"
Dec 23 02:26:18.797: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-0c5ed219-3a8a-4c6a-80c4-9f17d1088443 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 02:26:18.826: INFO: Waiting for pod pod-projected-configmaps-0c5ed219-3a8a-4c6a-80c4-9f17d1088443 to disappear
Dec 23 02:26:18.844: INFO: Pod pod-projected-configmaps-0c5ed219-3a8a-4c6a-80c4-9f17d1088443 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:26:18.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2669" for this suite.

• [SLOW TEST:7.373 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2238,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:26:18.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 23 02:26:18.924: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2313 /api/v1/namespaces/watch-2313/configmaps/e2e-watch-test-resource-version 617cd7f4-db01-45db-80e0-daabfee35095 23937144 0 2020-12-23 02:26:18 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 02:26:18.925: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2313 /api/v1/namespaces/watch-2313/configmaps/e2e-watch-test-resource-version 617cd7f4-db01-45db-80e0-daabfee35095 23937145 0 2020-12-23 02:26:18 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:26:18.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2313" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":148,"skipped":2241,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:26:18.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:26:19.027: INFO: Pod name rollover-pod: Found 0 pods out of 1
Dec 23 02:26:24.036: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 23 02:26:24.036: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 23 02:26:26.279: INFO: Creating deployment "test-rollover-deployment"
Dec 23 02:26:26.325: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 23 02:26:28.497: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 23 02:26:28.503: INFO: Ensure that both replica sets have 1 created replica
Dec 23 02:26:28.647: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 23 02:26:28.937: INFO: Updating deployment test-rollover-deployment
Dec 23 02:26:28.937: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 23 02:26:31.527: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 23 02:26:31.583: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 23 02:26:32.084: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 02:26:32.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287189, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287186, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:26:34.092: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 02:26:34.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287189, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287186, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:26:36.476: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 02:26:36.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287196, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287186, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:26:38.090: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 02:26:38.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287196, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287186, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:26:40.092: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 02:26:40.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287196, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287186, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:26:42.091: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 02:26:42.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287196, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287186, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:26:44.094: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 02:26:44.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287196, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287186, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:26:46.109: INFO: all replica sets need to contain the pod-template-hash label
Dec 23 02:26:46.109: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287187, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287196, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287186, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:26:48.091: INFO: 
Dec 23 02:26:48.091: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Dec 23 02:26:48.099: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-6680 /apis/apps/v1/namespaces/deployment-6680/deployments/test-rollover-deployment 55d0d39d-bd0b-4e52-9802-19652200827f 23937322 2 2020-12-23 02:26:26 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003765618  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-12-23 02:26:27 +0000 UTC,LastTransitionTime:2020-12-23 02:26:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-12-23 02:26:46 +0000 UTC,LastTransitionTime:2020-12-23 02:26:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Dec 23 02:26:48.102: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-6680 /apis/apps/v1/namespaces/deployment-6680/replicasets/test-rollover-deployment-574d6dfbff fac8b926-3d12-465a-9db7-5ee57d04bdc5 23937311 2 2020-12-23 02:26:28 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 55d0d39d-bd0b-4e52-9802-19652200827f 0xc003765c07 0xc003765c08}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003765c88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Dec 23 02:26:48.102: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 23 02:26:48.103: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-6680 /apis/apps/v1/namespaces/deployment-6680/replicasets/test-rollover-controller 4c8bdd52-afe3-4926-9346-1a078e15552b 23937321 2 2020-12-23 02:26:19 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 55d0d39d-bd0b-4e52-9802-19652200827f 0xc003765b17 0xc003765b18}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003765b88  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 23 02:26:48.103: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-6680 /apis/apps/v1/namespaces/deployment-6680/replicasets/test-rollover-deployment-f6c94f66c d5031d9c-da63-4d8d-a89d-4c4e3cb95dc8 23937249 2 2020-12-23 02:26:26 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 55d0d39d-bd0b-4e52-9802-19652200827f 0xc003765d30 0xc003765d31}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003765dc8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 23 02:26:48.106: INFO: Pod "test-rollover-deployment-574d6dfbff-5wv7r" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-5wv7r test-rollover-deployment-574d6dfbff- deployment-6680 /api/v1/namespaces/deployment-6680/pods/test-rollover-deployment-574d6dfbff-5wv7r fa9bb688-b7db-49d4-a6c8-892a0a70465d 23937272 0 2020-12-23 02:26:29 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff fac8b926-3d12-465a-9db7-5ee57d04bdc5 0xc0036f24d7 0xc0036f24d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cpgdm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cpgdm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cpgdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:26:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:26:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:26:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:26:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.231,StartTime:2020-12-23 02:26:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:26:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://542242e66bf89e3f6580389c1ec88a3feab3726778623f2c5c26c08055563acd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:26:48.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6680" for this suite.

• [SLOW TEST:29.183 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":149,"skipped":2246,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:26:48.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating secret secrets-6938/secret-test-3ef1ac25-703b-454e-9e82-adb33090e853
STEP: Creating a pod to test consume secrets
Dec 23 02:26:48.476: INFO: Waiting up to 5m0s for pod "pod-configmaps-43aa2cd3-7443-4e0e-8de8-9e91aeb30c61" in namespace "secrets-6938" to be "success or failure"
Dec 23 02:26:48.516: INFO: Pod "pod-configmaps-43aa2cd3-7443-4e0e-8de8-9e91aeb30c61": Phase="Pending", Reason="", readiness=false. Elapsed: 39.898355ms
Dec 23 02:26:50.590: INFO: Pod "pod-configmaps-43aa2cd3-7443-4e0e-8de8-9e91aeb30c61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113549116s
Dec 23 02:26:52.593: INFO: Pod "pod-configmaps-43aa2cd3-7443-4e0e-8de8-9e91aeb30c61": Phase="Running", Reason="", readiness=true. Elapsed: 4.117044537s
Dec 23 02:26:54.596: INFO: Pod "pod-configmaps-43aa2cd3-7443-4e0e-8de8-9e91aeb30c61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.120413041s
STEP: Saw pod success
Dec 23 02:26:54.597: INFO: Pod "pod-configmaps-43aa2cd3-7443-4e0e-8de8-9e91aeb30c61" satisfied condition "success or failure"
Dec 23 02:26:54.598: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-43aa2cd3-7443-4e0e-8de8-9e91aeb30c61 container env-test: 
STEP: delete the pod
Dec 23 02:26:54.643: INFO: Waiting for pod pod-configmaps-43aa2cd3-7443-4e0e-8de8-9e91aeb30c61 to disappear
Dec 23 02:26:54.787: INFO: Pod pod-configmaps-43aa2cd3-7443-4e0e-8de8-9e91aeb30c61 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:26:54.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6938" for this suite.

• [SLOW TEST:6.679 seconds]
[sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2257,"failed":0}
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:26:54.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-f8b3b775-9b06-4662-a60e-a0080f25f31c
STEP: Creating a pod to test consume secrets
Dec 23 02:26:54.863: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c70494bd-8922-415f-a78f-1aea8942a36d" in namespace "projected-2811" to be "success or failure"
Dec 23 02:26:54.882: INFO: Pod "pod-projected-secrets-c70494bd-8922-415f-a78f-1aea8942a36d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.603611ms
Dec 23 02:26:56.886: INFO: Pod "pod-projected-secrets-c70494bd-8922-415f-a78f-1aea8942a36d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022927475s
Dec 23 02:26:58.890: INFO: Pod "pod-projected-secrets-c70494bd-8922-415f-a78f-1aea8942a36d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026665979s
STEP: Saw pod success
Dec 23 02:26:58.890: INFO: Pod "pod-projected-secrets-c70494bd-8922-415f-a78f-1aea8942a36d" satisfied condition "success or failure"
Dec 23 02:26:58.892: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-c70494bd-8922-415f-a78f-1aea8942a36d container projected-secret-volume-test: 
STEP: delete the pod
Dec 23 02:26:58.923: INFO: Waiting for pod pod-projected-secrets-c70494bd-8922-415f-a78f-1aea8942a36d to disappear
Dec 23 02:26:58.928: INFO: Pod pod-projected-secrets-c70494bd-8922-415f-a78f-1aea8942a36d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:26:58.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2811" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2257,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:26:58.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override arguments
Dec 23 02:26:59.063: INFO: Waiting up to 5m0s for pod "client-containers-3d339665-4bf3-4b7a-b976-4ff0ee86f017" in namespace "containers-3228" to be "success or failure"
Dec 23 02:26:59.395: INFO: Pod "client-containers-3d339665-4bf3-4b7a-b976-4ff0ee86f017": Phase="Pending", Reason="", readiness=false. Elapsed: 331.959704ms
Dec 23 02:27:01.399: INFO: Pod "client-containers-3d339665-4bf3-4b7a-b976-4ff0ee86f017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335772187s
Dec 23 02:27:03.403: INFO: Pod "client-containers-3d339665-4bf3-4b7a-b976-4ff0ee86f017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.339638705s
STEP: Saw pod success
Dec 23 02:27:03.403: INFO: Pod "client-containers-3d339665-4bf3-4b7a-b976-4ff0ee86f017" satisfied condition "success or failure"
Dec 23 02:27:03.404: INFO: Trying to get logs from node jerma-worker2 pod client-containers-3d339665-4bf3-4b7a-b976-4ff0ee86f017 container test-container: 
STEP: delete the pod
Dec 23 02:27:03.433: INFO: Waiting for pod client-containers-3d339665-4bf3-4b7a-b976-4ff0ee86f017 to disappear
Dec 23 02:27:03.443: INFO: Pod client-containers-3d339665-4bf3-4b7a-b976-4ff0ee86f017 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:27:03.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3228" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2266,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:27:03.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Dec 23 02:27:03.526: INFO: Created pod &Pod{ObjectMeta:{dns-5797  dns-5797 /api/v1/namespaces/dns-5797/pods/dns-5797 e5ea7ff6-9b31-4833-852e-c5ca9733682d 23937465 0 2020-12-23 02:27:03 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wwsb9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wwsb9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wwsb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Dec 23 02:27:07.537: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5797 PodName:dns-5797 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:27:07.537: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:27:07.571561       6 log.go:172] (0xc00213abb0) (0xc0024c3680) Create stream
I1223 02:27:07.571584       6 log.go:172] (0xc00213abb0) (0xc0024c3680) Stream added, broadcasting: 1
I1223 02:27:07.573354       6 log.go:172] (0xc00213abb0) Reply frame received for 1
I1223 02:27:07.573417       6 log.go:172] (0xc00213abb0) (0xc001c05040) Create stream
I1223 02:27:07.573444       6 log.go:172] (0xc00213abb0) (0xc001c05040) Stream added, broadcasting: 3
I1223 02:27:07.574616       6 log.go:172] (0xc00213abb0) Reply frame received for 3
I1223 02:27:07.574674       6 log.go:172] (0xc00213abb0) (0xc001c05180) Create stream
I1223 02:27:07.574693       6 log.go:172] (0xc00213abb0) (0xc001c05180) Stream added, broadcasting: 5
I1223 02:27:07.575918       6 log.go:172] (0xc00213abb0) Reply frame received for 5
I1223 02:27:07.676146       6 log.go:172] (0xc00213abb0) Data frame received for 3
I1223 02:27:07.676172       6 log.go:172] (0xc001c05040) (3) Data frame handling
I1223 02:27:07.676191       6 log.go:172] (0xc001c05040) (3) Data frame sent
I1223 02:27:07.678980       6 log.go:172] (0xc00213abb0) Data frame received for 3
I1223 02:27:07.679028       6 log.go:172] (0xc001c05040) (3) Data frame handling
I1223 02:27:07.679358       6 log.go:172] (0xc00213abb0) Data frame received for 5
I1223 02:27:07.679408       6 log.go:172] (0xc001c05180) (5) Data frame handling
I1223 02:27:07.681443       6 log.go:172] (0xc00213abb0) Data frame received for 1
I1223 02:27:07.681468       6 log.go:172] (0xc0024c3680) (1) Data frame handling
I1223 02:27:07.681490       6 log.go:172] (0xc0024c3680) (1) Data frame sent
I1223 02:27:07.681596       6 log.go:172] (0xc00213abb0) (0xc0024c3680) Stream removed, broadcasting: 1
I1223 02:27:07.681630       6 log.go:172] (0xc00213abb0) Go away received
I1223 02:27:07.681749       6 log.go:172] (0xc00213abb0) (0xc0024c3680) Stream removed, broadcasting: 1
I1223 02:27:07.681778       6 log.go:172] (0xc00213abb0) (0xc001c05040) Stream removed, broadcasting: 3
I1223 02:27:07.681794       6 log.go:172] (0xc00213abb0) (0xc001c05180) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Dec 23 02:27:07.681: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5797 PodName:dns-5797 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:27:07.681: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:27:07.712927       6 log.go:172] (0xc002521080) (0xc002a00140) Create stream
I1223 02:27:07.712957       6 log.go:172] (0xc002521080) (0xc002a00140) Stream added, broadcasting: 1
I1223 02:27:07.714699       6 log.go:172] (0xc002521080) Reply frame received for 1
I1223 02:27:07.714759       6 log.go:172] (0xc002521080) (0xc0024c3860) Create stream
I1223 02:27:07.714778       6 log.go:172] (0xc002521080) (0xc0024c3860) Stream added, broadcasting: 3
I1223 02:27:07.715603       6 log.go:172] (0xc002521080) Reply frame received for 3
I1223 02:27:07.715633       6 log.go:172] (0xc002521080) (0xc0026ac500) Create stream
I1223 02:27:07.715646       6 log.go:172] (0xc002521080) (0xc0026ac500) Stream added, broadcasting: 5
I1223 02:27:07.716824       6 log.go:172] (0xc002521080) Reply frame received for 5
I1223 02:27:07.785970       6 log.go:172] (0xc002521080) Data frame received for 3
I1223 02:27:07.785991       6 log.go:172] (0xc0024c3860) (3) Data frame handling
I1223 02:27:07.786004       6 log.go:172] (0xc0024c3860) (3) Data frame sent
I1223 02:27:07.789051       6 log.go:172] (0xc002521080) Data frame received for 5
I1223 02:27:07.789076       6 log.go:172] (0xc0026ac500) (5) Data frame handling
I1223 02:27:07.789149       6 log.go:172] (0xc002521080) Data frame received for 3
I1223 02:27:07.789161       6 log.go:172] (0xc0024c3860) (3) Data frame handling
I1223 02:27:07.790437       6 log.go:172] (0xc002521080) Data frame received for 1
I1223 02:27:07.790457       6 log.go:172] (0xc002a00140) (1) Data frame handling
I1223 02:27:07.790471       6 log.go:172] (0xc002a00140) (1) Data frame sent
I1223 02:27:07.790484       6 log.go:172] (0xc002521080) (0xc002a00140) Stream removed, broadcasting: 1
I1223 02:27:07.790497       6 log.go:172] (0xc002521080) Go away received
I1223 02:27:07.790588       6 log.go:172] (0xc002521080) (0xc002a00140) Stream removed, broadcasting: 1
I1223 02:27:07.790637       6 log.go:172] (0xc002521080) (0xc0024c3860) Stream removed, broadcasting: 3
I1223 02:27:07.790671       6 log.go:172] (0xc002521080) (0xc0026ac500) Stream removed, broadcasting: 5
Dec 23 02:27:07.790: INFO: Deleting pod dns-5797...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:27:07.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5797" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":153,"skipped":2290,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:27:07.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3819.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3819.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3819.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3819.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3819.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3819.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 02:27:14.229: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:14.255: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:14.258: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:14.261: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:14.270: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:14.272: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:14.275: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:14.277: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:14.282: INFO: Lookups using dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local]

Dec 23 02:27:19.288: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:19.291: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:19.295: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:19.298: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:19.308: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:19.311: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:19.315: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:19.318: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:19.324: INFO: Lookups using dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local]

Dec 23 02:27:24.295: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:24.300: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:24.303: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:24.306: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:24.315: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:24.317: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:24.320: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:24.322: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:24.333: INFO: Lookups using dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local]

Dec 23 02:27:29.287: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:29.290: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:29.293: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:29.297: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:29.305: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:29.308: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:29.311: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:29.313: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:29.319: INFO: Lookups using dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local]

Dec 23 02:27:34.287: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:34.291: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:34.295: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:34.299: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:34.306: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:34.308: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:34.310: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:34.312: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:34.317: INFO: Lookups using dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local]

Dec 23 02:27:39.287: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:39.291: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:39.294: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:39.298: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:39.307: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:39.310: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:39.313: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:39.316: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local from pod dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367: the server could not find the requested resource (get pods dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367)
Dec 23 02:27:39.322: INFO: Lookups using dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3819.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3819.svc.cluster.local jessie_udp@dns-test-service-2.dns-3819.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3819.svc.cluster.local]

Dec 23 02:27:44.318: INFO: DNS probes using dns-3819/dns-test-98fe1e89-6a83-4f9c-8eb4-5452817f7367 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:27:44.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3819" for this suite.

• [SLOW TEST:37.094 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":154,"skipped":2325,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:27:44.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-122
[It] should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-122
Dec 23 02:27:45.217: INFO: Found 0 stateful pods, waiting for 1
Dec 23 02:27:55.222: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Dec 23 02:27:55.236: INFO: Deleting all statefulset in ns statefulset-122
Dec 23 02:27:55.238: INFO: Scaling statefulset ss to 0
Dec 23 02:28:15.320: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 02:28:15.323: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:28:15.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-122" for this suite.

• [SLOW TEST:30.435 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":155,"skipped":2334,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:28:15.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:28:22.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-58" for this suite.

• [SLOW TEST:7.072 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":156,"skipped":2343,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:28:22.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-07c81418-283d-474c-91eb-9b2f1124b050
STEP: Creating a pod to test consume configMaps
Dec 23 02:28:22.513: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e08f9fc3-ea61-402e-ae5f-75a72a00e29f" in namespace "projected-8239" to be "success or failure"
Dec 23 02:28:22.524: INFO: Pod "pod-projected-configmaps-e08f9fc3-ea61-402e-ae5f-75a72a00e29f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.243384ms
Dec 23 02:28:24.528: INFO: Pod "pod-projected-configmaps-e08f9fc3-ea61-402e-ae5f-75a72a00e29f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015468994s
Dec 23 02:28:26.532: INFO: Pod "pod-projected-configmaps-e08f9fc3-ea61-402e-ae5f-75a72a00e29f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019389711s
STEP: Saw pod success
Dec 23 02:28:26.532: INFO: Pod "pod-projected-configmaps-e08f9fc3-ea61-402e-ae5f-75a72a00e29f" satisfied condition "success or failure"
Dec 23 02:28:26.535: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e08f9fc3-ea61-402e-ae5f-75a72a00e29f container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 02:28:26.586: INFO: Waiting for pod pod-projected-configmaps-e08f9fc3-ea61-402e-ae5f-75a72a00e29f to disappear
Dec 23 02:28:26.644: INFO: Pod pod-projected-configmaps-e08f9fc3-ea61-402e-ae5f-75a72a00e29f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:28:26.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8239" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2361,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:28:26.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:28:26.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9775
I1223 02:28:26.866682       6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9775, replica count: 1
I1223 02:28:27.917139       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 02:28:28.917437       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 02:28:29.917635       6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 23 02:28:30.088: INFO: Created: latency-svc-58nkk
Dec 23 02:28:30.097: INFO: Got endpoints: latency-svc-58nkk [79.463968ms]
Dec 23 02:28:30.127: INFO: Created: latency-svc-2x7zz
Dec 23 02:28:30.142: INFO: Got endpoints: latency-svc-2x7zz [44.561147ms]
Dec 23 02:28:30.163: INFO: Created: latency-svc-hbwl7
Dec 23 02:28:30.175: INFO: Got endpoints: latency-svc-hbwl7 [78.468117ms]
Dec 23 02:28:30.232: INFO: Created: latency-svc-ggdbz
Dec 23 02:28:30.265: INFO: Created: latency-svc-g8468
Dec 23 02:28:30.265: INFO: Got endpoints: latency-svc-ggdbz [167.980072ms]
Dec 23 02:28:30.280: INFO: Got endpoints: latency-svc-g8468 [183.259037ms]
Dec 23 02:28:30.301: INFO: Created: latency-svc-bjtcl
Dec 23 02:28:30.314: INFO: Got endpoints: latency-svc-bjtcl [216.851728ms]
Dec 23 02:28:30.370: INFO: Created: latency-svc-99fw8
Dec 23 02:28:30.377: INFO: Got endpoints: latency-svc-99fw8 [279.549207ms]
Dec 23 02:28:30.397: INFO: Created: latency-svc-pb754
Dec 23 02:28:30.410: INFO: Got endpoints: latency-svc-pb754 [312.650284ms]
Dec 23 02:28:30.433: INFO: Created: latency-svc-k7ntf
Dec 23 02:28:30.446: INFO: Got endpoints: latency-svc-k7ntf [349.175707ms]
Dec 23 02:28:30.463: INFO: Created: latency-svc-hl4b2
Dec 23 02:28:30.519: INFO: Got endpoints: latency-svc-hl4b2 [421.926591ms]
Dec 23 02:28:30.525: INFO: Created: latency-svc-ll6bk
Dec 23 02:28:30.533: INFO: Got endpoints: latency-svc-ll6bk [436.108766ms]
Dec 23 02:28:30.613: INFO: Created: latency-svc-6hnpk
Dec 23 02:28:30.699: INFO: Got endpoints: latency-svc-6hnpk [601.885837ms]
Dec 23 02:28:30.739: INFO: Created: latency-svc-rgpwl
Dec 23 02:28:30.760: INFO: Got endpoints: latency-svc-rgpwl [662.086166ms]
Dec 23 02:28:30.781: INFO: Created: latency-svc-drnfc
Dec 23 02:28:30.796: INFO: Got endpoints: latency-svc-drnfc [698.263307ms]
Dec 23 02:28:30.843: INFO: Created: latency-svc-p6jsv
Dec 23 02:28:30.851: INFO: Got endpoints: latency-svc-p6jsv [753.50467ms]
Dec 23 02:28:30.883: INFO: Created: latency-svc-nvptn
Dec 23 02:28:30.898: INFO: Got endpoints: latency-svc-nvptn [800.974494ms]
Dec 23 02:28:30.925: INFO: Created: latency-svc-7n6wc
Dec 23 02:28:30.998: INFO: Got endpoints: latency-svc-7n6wc [856.537034ms]
Dec 23 02:28:31.003: INFO: Created: latency-svc-qqkfm
Dec 23 02:28:31.037: INFO: Got endpoints: latency-svc-qqkfm [861.196484ms]
Dec 23 02:28:31.075: INFO: Created: latency-svc-bml58
Dec 23 02:28:31.172: INFO: Got endpoints: latency-svc-bml58 [906.779182ms]
Dec 23 02:28:31.174: INFO: Created: latency-svc-4v7tj
Dec 23 02:28:31.187: INFO: Got endpoints: latency-svc-4v7tj [906.202244ms]
Dec 23 02:28:31.213: INFO: Created: latency-svc-p7k8d
Dec 23 02:28:31.229: INFO: Got endpoints: latency-svc-p7k8d [915.335234ms]
Dec 23 02:28:31.255: INFO: Created: latency-svc-kr4fm
Dec 23 02:28:31.304: INFO: Got endpoints: latency-svc-kr4fm [926.787271ms]
Dec 23 02:28:31.335: INFO: Created: latency-svc-jpz7z
Dec 23 02:28:31.350: INFO: Got endpoints: latency-svc-jpz7z [939.381674ms]
Dec 23 02:28:31.370: INFO: Created: latency-svc-556l2
Dec 23 02:28:31.386: INFO: Got endpoints: latency-svc-556l2 [939.561604ms]
Dec 23 02:28:31.461: INFO: Created: latency-svc-w8dxf
Dec 23 02:28:31.470: INFO: Got endpoints: latency-svc-w8dxf [950.381751ms]
Dec 23 02:28:31.495: INFO: Created: latency-svc-tvxkd
Dec 23 02:28:31.512: INFO: Got endpoints: latency-svc-tvxkd [979.073142ms]
Dec 23 02:28:31.543: INFO: Created: latency-svc-7v4xr
Dec 23 02:28:31.598: INFO: Got endpoints: latency-svc-7v4xr [898.349039ms]
Dec 23 02:28:31.609: INFO: Created: latency-svc-pssrd
Dec 23 02:28:31.621: INFO: Got endpoints: latency-svc-pssrd [861.281224ms]
Dec 23 02:28:31.645: INFO: Created: latency-svc-84q6h
Dec 23 02:28:31.659: INFO: Got endpoints: latency-svc-84q6h [862.79913ms]
Dec 23 02:28:31.687: INFO: Created: latency-svc-r9rfp
Dec 23 02:28:31.735: INFO: Got endpoints: latency-svc-r9rfp [884.28432ms]
Dec 23 02:28:31.748: INFO: Created: latency-svc-vx4kb
Dec 23 02:28:31.759: INFO: Got endpoints: latency-svc-vx4kb [861.116843ms]
Dec 23 02:28:31.790: INFO: Created: latency-svc-p69d9
Dec 23 02:28:31.813: INFO: Got endpoints: latency-svc-p69d9 [814.885035ms]
Dec 23 02:28:31.865: INFO: Created: latency-svc-8fh9t
Dec 23 02:28:31.868: INFO: Got endpoints: latency-svc-8fh9t [831.092319ms]
Dec 23 02:28:31.897: INFO: Created: latency-svc-lwmv7
Dec 23 02:28:31.910: INFO: Got endpoints: latency-svc-lwmv7 [738.210183ms]
Dec 23 02:28:31.940: INFO: Created: latency-svc-bqs8f
Dec 23 02:28:32.005: INFO: Got endpoints: latency-svc-bqs8f [817.863242ms]
Dec 23 02:28:32.011: INFO: Created: latency-svc-2wc5k
Dec 23 02:28:32.021: INFO: Got endpoints: latency-svc-2wc5k [791.882609ms]
Dec 23 02:28:32.047: INFO: Created: latency-svc-c6m7p
Dec 23 02:28:32.057: INFO: Got endpoints: latency-svc-c6m7p [753.614477ms]
Dec 23 02:28:32.082: INFO: Created: latency-svc-nvvm9
Dec 23 02:28:32.103: INFO: Got endpoints: latency-svc-nvvm9 [753.123682ms]
Dec 23 02:28:32.180: INFO: Created: latency-svc-ppng4
Dec 23 02:28:32.184: INFO: Got endpoints: latency-svc-ppng4 [797.438707ms]
Dec 23 02:28:32.221: INFO: Created: latency-svc-rr7tl
Dec 23 02:28:32.245: INFO: Got endpoints: latency-svc-rr7tl [775.461983ms]
Dec 23 02:28:32.275: INFO: Created: latency-svc-vsj5m
Dec 23 02:28:32.310: INFO: Got endpoints: latency-svc-vsj5m [797.147306ms]
Dec 23 02:28:32.346: INFO: Created: latency-svc-s8qwq
Dec 23 02:28:32.359: INFO: Got endpoints: latency-svc-s8qwq [761.306111ms]
Dec 23 02:28:32.377: INFO: Created: latency-svc-lrlcx
Dec 23 02:28:32.389: INFO: Got endpoints: latency-svc-lrlcx [767.588269ms]
Dec 23 02:28:32.456: INFO: Created: latency-svc-dxc9t
Dec 23 02:28:32.490: INFO: Got endpoints: latency-svc-dxc9t [831.454618ms]
Dec 23 02:28:32.527: INFO: Created: latency-svc-q4jml
Dec 23 02:28:32.540: INFO: Got endpoints: latency-svc-q4jml [804.643501ms]
Dec 23 02:28:32.611: INFO: Created: latency-svc-q4v6r
Dec 23 02:28:32.653: INFO: Created: latency-svc-xnkr5
Dec 23 02:28:32.653: INFO: Got endpoints: latency-svc-q4v6r [893.267945ms]
Dec 23 02:28:32.672: INFO: Got endpoints: latency-svc-xnkr5 [858.781698ms]
Dec 23 02:28:32.706: INFO: Created: latency-svc-qvq48
Dec 23 02:28:32.747: INFO: Got endpoints: latency-svc-qvq48 [879.057197ms]
Dec 23 02:28:32.754: INFO: Created: latency-svc-nhtwd
Dec 23 02:28:32.768: INFO: Got endpoints: latency-svc-nhtwd [858.223146ms]
Dec 23 02:28:32.791: INFO: Created: latency-svc-ngvrh
Dec 23 02:28:32.811: INFO: Got endpoints: latency-svc-ngvrh [806.168678ms]
Dec 23 02:28:32.891: INFO: Created: latency-svc-mbk98
Dec 23 02:28:32.893: INFO: Got endpoints: latency-svc-mbk98 [872.053673ms]
Dec 23 02:28:32.938: INFO: Created: latency-svc-vznhr
Dec 23 02:28:32.943: INFO: Got endpoints: latency-svc-vznhr [885.310572ms]
Dec 23 02:28:32.971: INFO: Created: latency-svc-krnxx
Dec 23 02:28:32.986: INFO: Got endpoints: latency-svc-krnxx [882.928212ms]
Dec 23 02:28:33.061: INFO: Created: latency-svc-ng5cm
Dec 23 02:28:33.088: INFO: Got endpoints: latency-svc-ng5cm [904.489493ms]
Dec 23 02:28:33.115: INFO: Created: latency-svc-z8tsg
Dec 23 02:28:33.178: INFO: Got endpoints: latency-svc-z8tsg [932.710236ms]
Dec 23 02:28:33.217: INFO: Created: latency-svc-xcwkk
Dec 23 02:28:33.239: INFO: Got endpoints: latency-svc-xcwkk [929.074475ms]
Dec 23 02:28:33.265: INFO: Created: latency-svc-xgczb
Dec 23 02:28:33.316: INFO: Got endpoints: latency-svc-xgczb [956.634311ms]
Dec 23 02:28:33.349: INFO: Created: latency-svc-2tdmx
Dec 23 02:28:33.364: INFO: Got endpoints: latency-svc-2tdmx [975.600274ms]
Dec 23 02:28:33.397: INFO: Created: latency-svc-nvp2p
Dec 23 02:28:33.413: INFO: Got endpoints: latency-svc-nvp2p [922.494283ms]
Dec 23 02:28:33.475: INFO: Created: latency-svc-4596k
Dec 23 02:28:33.497: INFO: Got endpoints: latency-svc-4596k [957.166489ms]
Dec 23 02:28:33.535: INFO: Created: latency-svc-v9zpd
Dec 23 02:28:33.552: INFO: Got endpoints: latency-svc-v9zpd [898.743858ms]
Dec 23 02:28:33.603: INFO: Created: latency-svc-w4xgr
Dec 23 02:28:33.606: INFO: Got endpoints: latency-svc-w4xgr [933.503012ms]
Dec 23 02:28:33.667: INFO: Created: latency-svc-hrkpc
Dec 23 02:28:33.696: INFO: Got endpoints: latency-svc-hrkpc [949.019392ms]
Dec 23 02:28:33.759: INFO: Created: latency-svc-gxnwf
Dec 23 02:28:33.762: INFO: Got endpoints: latency-svc-gxnwf [993.412005ms]
Dec 23 02:28:33.793: INFO: Created: latency-svc-6q4b2
Dec 23 02:28:33.804: INFO: Got endpoints: latency-svc-6q4b2 [993.23406ms]
Dec 23 02:28:33.835: INFO: Created: latency-svc-7bldl
Dec 23 02:28:33.846: INFO: Got endpoints: latency-svc-7bldl [952.663109ms]
Dec 23 02:28:33.896: INFO: Created: latency-svc-8rb6l
Dec 23 02:28:33.900: INFO: Got endpoints: latency-svc-8rb6l [957.450953ms]
Dec 23 02:28:33.930: INFO: Created: latency-svc-54dss
Dec 23 02:28:33.954: INFO: Got endpoints: latency-svc-54dss [968.721059ms]
Dec 23 02:28:33.991: INFO: Created: latency-svc-qw86m
Dec 23 02:28:34.041: INFO: Got endpoints: latency-svc-qw86m [952.425281ms]
Dec 23 02:28:34.063: INFO: Created: latency-svc-qp2ll
Dec 23 02:28:34.086: INFO: Got endpoints: latency-svc-qp2ll [908.47856ms]
Dec 23 02:28:34.111: INFO: Created: latency-svc-kdm99
Dec 23 02:28:34.124: INFO: Got endpoints: latency-svc-kdm99 [884.895291ms]
Dec 23 02:28:34.189: INFO: Created: latency-svc-q75jm
Dec 23 02:28:34.202: INFO: Got endpoints: latency-svc-q75jm [886.481458ms]
Dec 23 02:28:34.328: INFO: Created: latency-svc-sprxt
Dec 23 02:28:34.334: INFO: Got endpoints: latency-svc-sprxt [969.688819ms]
Dec 23 02:28:34.357: INFO: Created: latency-svc-9cnkz
Dec 23 02:28:34.370: INFO: Got endpoints: latency-svc-9cnkz [957.70398ms]
Dec 23 02:28:34.393: INFO: Created: latency-svc-2smc2
Dec 23 02:28:34.406: INFO: Got endpoints: latency-svc-2smc2 [909.528277ms]
Dec 23 02:28:34.464: INFO: Created: latency-svc-7pt8m
Dec 23 02:28:34.485: INFO: Got endpoints: latency-svc-7pt8m [932.95172ms]
Dec 23 02:28:34.507: INFO: Created: latency-svc-pjqdn
Dec 23 02:28:34.521: INFO: Got endpoints: latency-svc-pjqdn [915.466561ms]
Dec 23 02:28:34.554: INFO: Created: latency-svc-w7shk
Dec 23 02:28:34.610: INFO: Got endpoints: latency-svc-w7shk [913.647519ms]
Dec 23 02:28:34.620: INFO: Created: latency-svc-jzzqj
Dec 23 02:28:34.636: INFO: Got endpoints: latency-svc-jzzqj [874.025305ms]
Dec 23 02:28:34.668: INFO: Created: latency-svc-hp6qs
Dec 23 02:28:34.686: INFO: Got endpoints: latency-svc-hp6qs [881.445434ms]
Dec 23 02:28:34.741: INFO: Created: latency-svc-zxn58
Dec 23 02:28:34.744: INFO: Got endpoints: latency-svc-zxn58 [898.044798ms]
Dec 23 02:28:34.770: INFO: Created: latency-svc-8vr4l
Dec 23 02:28:34.787: INFO: Got endpoints: latency-svc-8vr4l [886.298681ms]
Dec 23 02:28:34.806: INFO: Created: latency-svc-vcphb
Dec 23 02:28:34.816: INFO: Got endpoints: latency-svc-vcphb [861.976059ms]
Dec 23 02:28:34.885: INFO: Created: latency-svc-9x6lz
Dec 23 02:28:34.889: INFO: Got endpoints: latency-svc-9x6lz [848.316659ms]
Dec 23 02:28:34.957: INFO: Created: latency-svc-hlb67
Dec 23 02:28:34.974: INFO: Got endpoints: latency-svc-hlb67 [887.226572ms]
Dec 23 02:28:35.019: INFO: Created: latency-svc-mc2tr
Dec 23 02:28:35.022: INFO: Got endpoints: latency-svc-mc2tr [898.091448ms]
Dec 23 02:28:35.089: INFO: Created: latency-svc-c9kd2
Dec 23 02:28:35.106: INFO: Got endpoints: latency-svc-c9kd2 [903.469742ms]
Dec 23 02:28:35.167: INFO: Created: latency-svc-tb22c
Dec 23 02:28:35.184: INFO: Got endpoints: latency-svc-tb22c [849.810964ms]
Dec 23 02:28:35.221: INFO: Created: latency-svc-b2r8m
Dec 23 02:28:35.239: INFO: Got endpoints: latency-svc-b2r8m [868.735422ms]
Dec 23 02:28:35.293: INFO: Created: latency-svc-gwqvg
Dec 23 02:28:35.298: INFO: Got endpoints: latency-svc-gwqvg [891.539908ms]
Dec 23 02:28:35.322: INFO: Created: latency-svc-fx8bd
Dec 23 02:28:35.346: INFO: Got endpoints: latency-svc-fx8bd [861.879463ms]
Dec 23 02:28:35.376: INFO: Created: latency-svc-pkfkw
Dec 23 02:28:35.388: INFO: Got endpoints: latency-svc-pkfkw [867.377069ms]
Dec 23 02:28:35.436: INFO: Created: latency-svc-zsjwt
Dec 23 02:28:35.443: INFO: Got endpoints: latency-svc-zsjwt [833.001319ms]
Dec 23 02:28:35.466: INFO: Created: latency-svc-7f755
Dec 23 02:28:35.480: INFO: Got endpoints: latency-svc-7f755 [843.916357ms]
Dec 23 02:28:35.503: INFO: Created: latency-svc-2vm9c
Dec 23 02:28:35.515: INFO: Got endpoints: latency-svc-2vm9c [829.766436ms]
Dec 23 02:28:35.586: INFO: Created: latency-svc-mb9z4
Dec 23 02:28:35.588: INFO: Got endpoints: latency-svc-mb9z4 [844.189768ms]
Dec 23 02:28:35.617: INFO: Created: latency-svc-wmzq2
Dec 23 02:28:35.646: INFO: Got endpoints: latency-svc-wmzq2 [859.750551ms]
Dec 23 02:28:35.683: INFO: Created: latency-svc-r25pv
Dec 23 02:28:35.717: INFO: Got endpoints: latency-svc-r25pv [900.507391ms]
Dec 23 02:28:35.737: INFO: Created: latency-svc-zt5zt
Dec 23 02:28:35.751: INFO: Got endpoints: latency-svc-zt5zt [861.879765ms]
Dec 23 02:28:35.784: INFO: Created: latency-svc-q8j2f
Dec 23 02:28:35.805: INFO: Got endpoints: latency-svc-q8j2f [831.087332ms]
Dec 23 02:28:35.897: INFO: Created: latency-svc-mcdf8
Dec 23 02:28:35.901: INFO: Got endpoints: latency-svc-mcdf8 [878.9402ms]
Dec 23 02:28:35.923: INFO: Created: latency-svc-6gxmp
Dec 23 02:28:35.937: INFO: Got endpoints: latency-svc-6gxmp [831.405036ms]
Dec 23 02:28:35.958: INFO: Created: latency-svc-p5ld9
Dec 23 02:28:35.968: INFO: Got endpoints: latency-svc-p5ld9 [783.989985ms]
Dec 23 02:28:35.995: INFO: Created: latency-svc-6c4vc
Dec 23 02:28:36.034: INFO: Got endpoints: latency-svc-6c4vc [794.806811ms]
Dec 23 02:28:36.048: INFO: Created: latency-svc-mttkr
Dec 23 02:28:36.064: INFO: Got endpoints: latency-svc-mttkr [766.299996ms]
Dec 23 02:28:36.096: INFO: Created: latency-svc-6hpfw
Dec 23 02:28:36.118: INFO: Got endpoints: latency-svc-6hpfw [771.643432ms]
Dec 23 02:28:36.186: INFO: Created: latency-svc-qkbs9
Dec 23 02:28:36.197: INFO: Got endpoints: latency-svc-qkbs9 [808.147981ms]
Dec 23 02:28:36.230: INFO: Created: latency-svc-6d25f
Dec 23 02:28:36.257: INFO: Got endpoints: latency-svc-6d25f [814.651564ms]
Dec 23 02:28:36.304: INFO: Created: latency-svc-x56d6
Dec 23 02:28:36.311: INFO: Got endpoints: latency-svc-x56d6 [830.540793ms]
Dec 23 02:28:36.342: INFO: Created: latency-svc-x49rf
Dec 23 02:28:36.378: INFO: Got endpoints: latency-svc-x49rf [862.980768ms]
Dec 23 02:28:36.448: INFO: Created: latency-svc-7vqhl
Dec 23 02:28:36.475: INFO: Got endpoints: latency-svc-7vqhl [886.399701ms]
Dec 23 02:28:36.505: INFO: Created: latency-svc-f5xq8
Dec 23 02:28:36.522: INFO: Got endpoints: latency-svc-f5xq8 [875.404808ms]
Dec 23 02:28:36.541: INFO: Created: latency-svc-8l5b8
Dec 23 02:28:36.603: INFO: Got endpoints: latency-svc-8l5b8 [886.110793ms]
Dec 23 02:28:36.605: INFO: Created: latency-svc-ql524
Dec 23 02:28:36.612: INFO: Got endpoints: latency-svc-ql524 [861.230712ms]
Dec 23 02:28:36.636: INFO: Created: latency-svc-q8s4r
Dec 23 02:28:36.649: INFO: Got endpoints: latency-svc-q8s4r [844.121957ms]
Dec 23 02:28:36.690: INFO: Created: latency-svc-qq6pw
Dec 23 02:28:36.741: INFO: Got endpoints: latency-svc-qq6pw [840.079708ms]
Dec 23 02:28:36.756: INFO: Created: latency-svc-qb5g5
Dec 23 02:28:36.787: INFO: Got endpoints: latency-svc-qb5g5 [849.608892ms]
Dec 23 02:28:36.825: INFO: Created: latency-svc-r4tvf
Dec 23 02:28:36.840: INFO: Got endpoints: latency-svc-r4tvf [871.920088ms]
Dec 23 02:28:36.919: INFO: Created: latency-svc-wpzt4
Dec 23 02:28:36.930: INFO: Got endpoints: latency-svc-wpzt4 [896.012916ms]
Dec 23 02:28:36.948: INFO: Created: latency-svc-g64vl
Dec 23 02:28:36.960: INFO: Got endpoints: latency-svc-g64vl [895.670084ms]
Dec 23 02:28:36.978: INFO: Created: latency-svc-xdnvr
Dec 23 02:28:36.991: INFO: Got endpoints: latency-svc-xdnvr [872.383432ms]
Dec 23 02:28:37.047: INFO: Created: latency-svc-pfhw5
Dec 23 02:28:37.050: INFO: Got endpoints: latency-svc-pfhw5 [853.299758ms]
Dec 23 02:28:37.110: INFO: Created: latency-svc-n6phc
Dec 23 02:28:37.123: INFO: Got endpoints: latency-svc-n6phc [865.619896ms]
Dec 23 02:28:37.184: INFO: Created: latency-svc-2jsrs
Dec 23 02:28:37.184: INFO: Got endpoints: latency-svc-2jsrs [873.553827ms]
Dec 23 02:28:37.218: INFO: Created: latency-svc-hwnz6
Dec 23 02:28:37.236: INFO: Got endpoints: latency-svc-hwnz6 [857.178066ms]
Dec 23 02:28:37.260: INFO: Created: latency-svc-4gk2m
Dec 23 02:28:37.316: INFO: Got endpoints: latency-svc-4gk2m [840.826389ms]
Dec 23 02:28:37.332: INFO: Created: latency-svc-klxkw
Dec 23 02:28:37.346: INFO: Got endpoints: latency-svc-klxkw [824.501397ms]
Dec 23 02:28:37.387: INFO: Created: latency-svc-885xh
Dec 23 02:28:37.495: INFO: Got endpoints: latency-svc-885xh [892.249787ms]
Dec 23 02:28:37.497: INFO: Created: latency-svc-2tpgb
Dec 23 02:28:37.502: INFO: Got endpoints: latency-svc-2tpgb [889.814155ms]
Dec 23 02:28:37.530: INFO: Created: latency-svc-4dkz2
Dec 23 02:28:37.551: INFO: Got endpoints: latency-svc-4dkz2 [901.970529ms]
Dec 23 02:28:37.572: INFO: Created: latency-svc-62gqh
Dec 23 02:28:37.587: INFO: Got endpoints: latency-svc-62gqh [845.923836ms]
Dec 23 02:28:37.633: INFO: Created: latency-svc-qtlqk
Dec 23 02:28:37.636: INFO: Got endpoints: latency-svc-qtlqk [848.59055ms]
Dec 23 02:28:37.663: INFO: Created: latency-svc-7fxlf
Dec 23 02:28:37.677: INFO: Got endpoints: latency-svc-7fxlf [837.587115ms]
Dec 23 02:28:37.698: INFO: Created: latency-svc-qvx69
Dec 23 02:28:37.716: INFO: Got endpoints: latency-svc-qvx69 [785.960591ms]
Dec 23 02:28:37.771: INFO: Created: latency-svc-dqnt5
Dec 23 02:28:37.774: INFO: Got endpoints: latency-svc-dqnt5 [813.900327ms]
Dec 23 02:28:37.830: INFO: Created: latency-svc-jjdl5
Dec 23 02:28:37.847: INFO: Got endpoints: latency-svc-jjdl5 [855.919933ms]
Dec 23 02:28:37.866: INFO: Created: latency-svc-7khs8
Dec 23 02:28:37.904: INFO: Got endpoints: latency-svc-7khs8 [853.579159ms]
Dec 23 02:28:37.920: INFO: Created: latency-svc-cmt84
Dec 23 02:28:37.931: INFO: Got endpoints: latency-svc-cmt84 [807.910633ms]
Dec 23 02:28:37.951: INFO: Created: latency-svc-czsz2
Dec 23 02:28:37.962: INFO: Got endpoints: latency-svc-czsz2 [777.357957ms]
Dec 23 02:28:37.980: INFO: Created: latency-svc-sf7s7
Dec 23 02:28:38.028: INFO: Got endpoints: latency-svc-sf7s7 [792.300933ms]
Dec 23 02:28:38.046: INFO: Created: latency-svc-lfjb2
Dec 23 02:28:38.064: INFO: Got endpoints: latency-svc-lfjb2 [748.097329ms]
Dec 23 02:28:38.082: INFO: Created: latency-svc-l4mjj
Dec 23 02:28:38.095: INFO: Got endpoints: latency-svc-l4mjj [748.482505ms]
Dec 23 02:28:38.118: INFO: Created: latency-svc-qrpjz
Dec 23 02:28:38.178: INFO: Got endpoints: latency-svc-qrpjz [682.513531ms]
Dec 23 02:28:38.181: INFO: Created: latency-svc-wqkrm
Dec 23 02:28:38.191: INFO: Got endpoints: latency-svc-wqkrm [688.70991ms]
Dec 23 02:28:38.214: INFO: Created: latency-svc-42k68
Dec 23 02:28:38.238: INFO: Got endpoints: latency-svc-42k68 [686.86048ms]
Dec 23 02:28:38.268: INFO: Created: latency-svc-fgqv4
Dec 23 02:28:38.358: INFO: Got endpoints: latency-svc-fgqv4 [770.50891ms]
Dec 23 02:28:38.360: INFO: Created: latency-svc-m5jzh
Dec 23 02:28:38.370: INFO: Got endpoints: latency-svc-m5jzh [733.98355ms]
Dec 23 02:28:38.418: INFO: Created: latency-svc-5lqd7
Dec 23 02:28:38.436: INFO: Got endpoints: latency-svc-5lqd7 [758.687494ms]
Dec 23 02:28:38.454: INFO: Created: latency-svc-77wfz
Dec 23 02:28:38.490: INFO: Got endpoints: latency-svc-77wfz [773.949819ms]
Dec 23 02:28:38.514: INFO: Created: latency-svc-895rn
Dec 23 02:28:38.528: INFO: Got endpoints: latency-svc-895rn [754.0652ms]
Dec 23 02:28:38.562: INFO: Created: latency-svc-2qr2l
Dec 23 02:28:38.577: INFO: Got endpoints: latency-svc-2qr2l [730.456703ms]
Dec 23 02:28:38.628: INFO: Created: latency-svc-86znz
Dec 23 02:28:38.641: INFO: Got endpoints: latency-svc-86znz [736.926236ms]
Dec 23 02:28:38.683: INFO: Created: latency-svc-tfzgr
Dec 23 02:28:38.724: INFO: Got endpoints: latency-svc-tfzgr [792.655502ms]
Dec 23 02:28:38.783: INFO: Created: latency-svc-mh9dn
Dec 23 02:28:38.808: INFO: Got endpoints: latency-svc-mh9dn [846.433156ms]
Dec 23 02:28:38.808: INFO: Created: latency-svc-kz57h
Dec 23 02:28:38.824: INFO: Got endpoints: latency-svc-kz57h [795.647718ms]
Dec 23 02:28:38.850: INFO: Created: latency-svc-5wxcn
Dec 23 02:28:38.866: INFO: Got endpoints: latency-svc-5wxcn [801.786745ms]
Dec 23 02:28:38.946: INFO: Created: latency-svc-qtb44
Dec 23 02:28:38.970: INFO: Got endpoints: latency-svc-qtb44 [874.719587ms]
Dec 23 02:28:38.972: INFO: Created: latency-svc-nm7bb
Dec 23 02:28:38.997: INFO: Got endpoints: latency-svc-nm7bb [818.711456ms]
Dec 23 02:28:39.043: INFO: Created: latency-svc-htgbd
Dec 23 02:28:39.089: INFO: Got endpoints: latency-svc-htgbd [897.650173ms]
Dec 23 02:28:39.096: INFO: Created: latency-svc-fszwp
Dec 23 02:28:39.116: INFO: Got endpoints: latency-svc-fszwp [877.767348ms]
Dec 23 02:28:39.144: INFO: Created: latency-svc-wngmd
Dec 23 02:28:39.168: INFO: Got endpoints: latency-svc-wngmd [810.3309ms]
Dec 23 02:28:39.186: INFO: Created: latency-svc-5gvmq
Dec 23 02:28:39.226: INFO: Got endpoints: latency-svc-5gvmq [856.301647ms]
Dec 23 02:28:39.246: INFO: Created: latency-svc-qwrhg
Dec 23 02:28:39.264: INFO: Got endpoints: latency-svc-qwrhg [827.630873ms]
Dec 23 02:28:39.288: INFO: Created: latency-svc-fpphp
Dec 23 02:28:39.306: INFO: Got endpoints: latency-svc-fpphp [815.623774ms]
Dec 23 02:28:39.324: INFO: Created: latency-svc-5hdm7
Dec 23 02:28:39.364: INFO: Got endpoints: latency-svc-5hdm7 [835.669649ms]
Dec 23 02:28:39.384: INFO: Created: latency-svc-bk8mz
Dec 23 02:28:39.402: INFO: Got endpoints: latency-svc-bk8mz [825.33807ms]
Dec 23 02:28:39.440: INFO: Created: latency-svc-l8ljr
Dec 23 02:28:39.462: INFO: Got endpoints: latency-svc-l8ljr [821.104855ms]
Dec 23 02:28:39.520: INFO: Created: latency-svc-fppsw
Dec 23 02:28:39.523: INFO: Got endpoints: latency-svc-fppsw [798.864115ms]
Dec 23 02:28:39.559: INFO: Created: latency-svc-8h4ph
Dec 23 02:28:39.572: INFO: Got endpoints: latency-svc-8h4ph [763.991226ms]
Dec 23 02:28:39.595: INFO: Created: latency-svc-frms9
Dec 23 02:28:39.609: INFO: Got endpoints: latency-svc-frms9 [784.996065ms]
Dec 23 02:28:39.658: INFO: Created: latency-svc-g2thl
Dec 23 02:28:39.661: INFO: Got endpoints: latency-svc-g2thl [795.253959ms]
Dec 23 02:28:39.697: INFO: Created: latency-svc-4s85l
Dec 23 02:28:39.711: INFO: Got endpoints: latency-svc-4s85l [740.846872ms]
Dec 23 02:28:39.739: INFO: Created: latency-svc-m8zfp
Dec 23 02:28:39.819: INFO: Got endpoints: latency-svc-m8zfp [822.05183ms]
Dec 23 02:28:39.840: INFO: Created: latency-svc-dw2c8
Dec 23 02:28:39.849: INFO: Got endpoints: latency-svc-dw2c8 [760.084701ms]
Dec 23 02:28:39.870: INFO: Created: latency-svc-jzsk2
Dec 23 02:28:39.880: INFO: Got endpoints: latency-svc-jzsk2 [764.264288ms]
Dec 23 02:28:39.902: INFO: Created: latency-svc-79dcm
Dec 23 02:28:39.910: INFO: Got endpoints: latency-svc-79dcm [742.33336ms]
Dec 23 02:28:39.969: INFO: Created: latency-svc-9x99l
Dec 23 02:28:39.996: INFO: Got endpoints: latency-svc-9x99l [770.07838ms]
Dec 23 02:28:39.997: INFO: Created: latency-svc-qfcj5
Dec 23 02:28:40.012: INFO: Got endpoints: latency-svc-qfcj5 [747.899907ms]
Dec 23 02:28:40.044: INFO: Created: latency-svc-5qhrz
Dec 23 02:28:40.054: INFO: Got endpoints: latency-svc-5qhrz [747.853385ms]
Dec 23 02:28:40.125: INFO: Created: latency-svc-r79sz
Dec 23 02:28:40.128: INFO: Got endpoints: latency-svc-r79sz [763.633938ms]
Dec 23 02:28:40.166: INFO: Created: latency-svc-6b4vq
Dec 23 02:28:40.181: INFO: Got endpoints: latency-svc-6b4vq [778.221246ms]
Dec 23 02:28:40.212: INFO: Created: latency-svc-g724n
Dec 23 02:28:40.250: INFO: Got endpoints: latency-svc-g724n [788.589685ms]
Dec 23 02:28:40.266: INFO: Created: latency-svc-47zn9
Dec 23 02:28:40.283: INFO: Got endpoints: latency-svc-47zn9 [760.421404ms]
Dec 23 02:28:40.302: INFO: Created: latency-svc-2bd45
Dec 23 02:28:40.320: INFO: Got endpoints: latency-svc-2bd45 [747.801543ms]
Dec 23 02:28:40.411: INFO: Created: latency-svc-7cqt9
Dec 23 02:28:40.416: INFO: Got endpoints: latency-svc-7cqt9 [807.645476ms]
Dec 23 02:28:40.440: INFO: Created: latency-svc-85zwm
Dec 23 02:28:40.458: INFO: Got endpoints: latency-svc-85zwm [797.123844ms]
Dec 23 02:28:40.482: INFO: Created: latency-svc-n96qh
Dec 23 02:28:40.555: INFO: Got endpoints: latency-svc-n96qh [844.201298ms]
Dec 23 02:28:40.567: INFO: Created: latency-svc-mrvn9
Dec 23 02:28:40.579: INFO: Got endpoints: latency-svc-mrvn9 [759.807319ms]
Dec 23 02:28:40.596: INFO: Created: latency-svc-9j2cn
Dec 23 02:28:40.620: INFO: Got endpoints: latency-svc-9j2cn [771.339735ms]
Dec 23 02:28:40.650: INFO: Created: latency-svc-fm775
Dec 23 02:28:40.735: INFO: Got endpoints: latency-svc-fm775 [854.699229ms]
Dec 23 02:28:40.737: INFO: Created: latency-svc-bbl9q
Dec 23 02:28:40.742: INFO: Got endpoints: latency-svc-bbl9q [831.408884ms]
Dec 23 02:28:40.765: INFO: Created: latency-svc-sk6dk
Dec 23 02:28:40.794: INFO: Got endpoints: latency-svc-sk6dk [798.002803ms]
Dec 23 02:28:40.825: INFO: Created: latency-svc-vswcd
Dec 23 02:28:40.886: INFO: Got endpoints: latency-svc-vswcd [873.631671ms]
Dec 23 02:28:40.887: INFO: Created: latency-svc-b4rbs
Dec 23 02:28:40.892: INFO: Got endpoints: latency-svc-b4rbs [838.643094ms]
Dec 23 02:28:40.950: INFO: Created: latency-svc-d9zpk
Dec 23 02:28:40.965: INFO: Got endpoints: latency-svc-d9zpk [837.215407ms]
Dec 23 02:28:41.022: INFO: Created: latency-svc-2sbdn
Dec 23 02:28:41.025: INFO: Got endpoints: latency-svc-2sbdn [843.939439ms]
Dec 23 02:28:41.058: INFO: Created: latency-svc-8jt5x
Dec 23 02:28:41.079: INFO: Got endpoints: latency-svc-8jt5x [828.909299ms]
Dec 23 02:28:41.106: INFO: Created: latency-svc-pr5hd
Dec 23 02:28:41.162: INFO: Got endpoints: latency-svc-pr5hd [878.744362ms]
Dec 23 02:28:41.178: INFO: Created: latency-svc-hzvvb
Dec 23 02:28:41.200: INFO: Got endpoints: latency-svc-hzvvb [880.072932ms]
Dec 23 02:28:41.227: INFO: Created: latency-svc-5tcxv
Dec 23 02:28:41.242: INFO: Got endpoints: latency-svc-5tcxv [825.722961ms]
Dec 23 02:28:41.310: INFO: Created: latency-svc-fsbx2
Dec 23 02:28:41.320: INFO: Got endpoints: latency-svc-fsbx2 [862.216948ms]
Dec 23 02:28:41.321: INFO: Latencies: [44.561147ms 78.468117ms 167.980072ms 183.259037ms 216.851728ms 279.549207ms 312.650284ms 349.175707ms 421.926591ms 436.108766ms 601.885837ms 662.086166ms 682.513531ms 686.86048ms 688.70991ms 698.263307ms 730.456703ms 733.98355ms 736.926236ms 738.210183ms 740.846872ms 742.33336ms 747.801543ms 747.853385ms 747.899907ms 748.097329ms 748.482505ms 753.123682ms 753.50467ms 753.614477ms 754.0652ms 758.687494ms 759.807319ms 760.084701ms 760.421404ms 761.306111ms 763.633938ms 763.991226ms 764.264288ms 766.299996ms 767.588269ms 770.07838ms 770.50891ms 771.339735ms 771.643432ms 773.949819ms 775.461983ms 777.357957ms 778.221246ms 783.989985ms 784.996065ms 785.960591ms 788.589685ms 791.882609ms 792.300933ms 792.655502ms 794.806811ms 795.253959ms 795.647718ms 797.123844ms 797.147306ms 797.438707ms 798.002803ms 798.864115ms 800.974494ms 801.786745ms 804.643501ms 806.168678ms 807.645476ms 807.910633ms 808.147981ms 810.3309ms 813.900327ms 814.651564ms 814.885035ms 815.623774ms 817.863242ms 818.711456ms 821.104855ms 822.05183ms 824.501397ms 825.33807ms 825.722961ms 827.630873ms 828.909299ms 829.766436ms 830.540793ms 831.087332ms 831.092319ms 831.405036ms 831.408884ms 831.454618ms 833.001319ms 835.669649ms 837.215407ms 837.587115ms 838.643094ms 840.079708ms 840.826389ms 843.916357ms 843.939439ms 844.121957ms 844.189768ms 844.201298ms 845.923836ms 846.433156ms 848.316659ms 848.59055ms 849.608892ms 849.810964ms 853.299758ms 853.579159ms 854.699229ms 855.919933ms 856.301647ms 856.537034ms 857.178066ms 858.223146ms 858.781698ms 859.750551ms 861.116843ms 861.196484ms 861.230712ms 861.281224ms 861.879463ms 861.879765ms 861.976059ms 862.216948ms 862.79913ms 862.980768ms 865.619896ms 867.377069ms 868.735422ms 871.920088ms 872.053673ms 872.383432ms 873.553827ms 873.631671ms 874.025305ms 874.719587ms 875.404808ms 877.767348ms 878.744362ms 878.9402ms 879.057197ms 880.072932ms 881.445434ms 882.928212ms 884.28432ms 884.895291ms 885.310572ms 886.110793ms 886.298681ms 886.399701ms 886.481458ms 887.226572ms 889.814155ms 891.539908ms 892.249787ms 893.267945ms 895.670084ms 896.012916ms 897.650173ms 898.044798ms 898.091448ms 898.349039ms 898.743858ms 900.507391ms 901.970529ms 903.469742ms 904.489493ms 906.202244ms 906.779182ms 908.47856ms 909.528277ms 913.647519ms 915.335234ms 915.466561ms 922.494283ms 926.787271ms 929.074475ms 932.710236ms 932.95172ms 933.503012ms 939.381674ms 939.561604ms 949.019392ms 950.381751ms 952.425281ms 952.663109ms 956.634311ms 957.166489ms 957.450953ms 957.70398ms 968.721059ms 969.688819ms 975.600274ms 979.073142ms 993.23406ms 993.412005ms]
Dec 23 02:28:41.321: INFO: 50 %ile: 843.939439ms
Dec 23 02:28:41.321: INFO: 90 %ile: 929.074475ms
Dec 23 02:28:41.321: INFO: 99 %ile: 993.23406ms
Dec 23 02:28:41.321: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:28:41.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9775" for this suite.

• [SLOW TEST:14.748 seconds]
[sig-network] Service endpoints latency
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":158,"skipped":2390,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:28:41.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:28:41.501: INFO: Creating ReplicaSet my-hostname-basic-5e568404-8ec7-4387-9194-790a85ec6ca8
Dec 23 02:28:41.591: INFO: Pod name my-hostname-basic-5e568404-8ec7-4387-9194-790a85ec6ca8: Found 0 pods out of 1
Dec 23 02:28:46.602: INFO: Pod name my-hostname-basic-5e568404-8ec7-4387-9194-790a85ec6ca8: Found 1 pods out of 1
Dec 23 02:28:46.602: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5e568404-8ec7-4387-9194-790a85ec6ca8" is running
Dec 23 02:28:46.621: INFO: Pod "my-hostname-basic-5e568404-8ec7-4387-9194-790a85ec6ca8-z4sb2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 02:28:41 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 02:28:44 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 02:28:44 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-12-23 02:28:41 +0000 UTC Reason: Message:}])
Dec 23 02:28:46.621: INFO: Trying to dial the pod
Dec 23 02:28:51.723: INFO: Controller my-hostname-basic-5e568404-8ec7-4387-9194-790a85ec6ca8: Got expected result from replica 1 [my-hostname-basic-5e568404-8ec7-4387-9194-790a85ec6ca8-z4sb2]: "my-hostname-basic-5e568404-8ec7-4387-9194-790a85ec6ca8-z4sb2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:28:51.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7561" for this suite.

• [SLOW TEST:10.356 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":159,"skipped":2400,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:28:51.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-3e19a528-5c9a-464b-b1da-4eea9d4b2929
STEP: Creating a pod to test consume configMaps
Dec 23 02:28:51.914: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed4e01b2-b0ad-443c-9ed8-463aa184805d" in namespace "configmap-7304" to be "success or failure"
Dec 23 02:28:51.926: INFO: Pod "pod-configmaps-ed4e01b2-b0ad-443c-9ed8-463aa184805d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.431465ms
Dec 23 02:28:54.030: INFO: Pod "pod-configmaps-ed4e01b2-b0ad-443c-9ed8-463aa184805d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115865358s
Dec 23 02:28:56.065: INFO: Pod "pod-configmaps-ed4e01b2-b0ad-443c-9ed8-463aa184805d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.150175137s
STEP: Saw pod success
Dec 23 02:28:56.065: INFO: Pod "pod-configmaps-ed4e01b2-b0ad-443c-9ed8-463aa184805d" satisfied condition "success or failure"
Dec 23 02:28:56.092: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ed4e01b2-b0ad-443c-9ed8-463aa184805d container configmap-volume-test: 
STEP: delete the pod
Dec 23 02:28:56.277: INFO: Waiting for pod pod-configmaps-ed4e01b2-b0ad-443c-9ed8-463aa184805d to disappear
Dec 23 02:28:56.289: INFO: Pod pod-configmaps-ed4e01b2-b0ad-443c-9ed8-463aa184805d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:28:56.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7304" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2406,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:28:56.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 23 02:28:56.541: INFO: Waiting up to 5m0s for pod "pod-a2ea3334-4d8f-4bd4-a399-4cfdcbd09870" in namespace "emptydir-3342" to be "success or failure"
Dec 23 02:28:56.553: INFO: Pod "pod-a2ea3334-4d8f-4bd4-a399-4cfdcbd09870": Phase="Pending", Reason="", readiness=false. Elapsed: 11.681206ms
Dec 23 02:28:58.693: INFO: Pod "pod-a2ea3334-4d8f-4bd4-a399-4cfdcbd09870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151845064s
Dec 23 02:29:00.697: INFO: Pod "pod-a2ea3334-4d8f-4bd4-a399-4cfdcbd09870": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156027139s
STEP: Saw pod success
Dec 23 02:29:00.697: INFO: Pod "pod-a2ea3334-4d8f-4bd4-a399-4cfdcbd09870" satisfied condition "success or failure"
Dec 23 02:29:00.699: INFO: Trying to get logs from node jerma-worker2 pod pod-a2ea3334-4d8f-4bd4-a399-4cfdcbd09870 container test-container: 
STEP: delete the pod
Dec 23 02:29:00.997: INFO: Waiting for pod pod-a2ea3334-4d8f-4bd4-a399-4cfdcbd09870 to disappear
Dec 23 02:29:01.040: INFO: Pod pod-a2ea3334-4d8f-4bd4-a399-4cfdcbd09870 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:29:01.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3342" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2426,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:29:01.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Dec 23 02:29:01.409: INFO: Waiting up to 5m0s for pod "client-containers-ade6d0e0-9036-46f3-86a5-3f3a1e169c62" in namespace "containers-3464" to be "success or failure"
Dec 23 02:29:01.562: INFO: Pod "client-containers-ade6d0e0-9036-46f3-86a5-3f3a1e169c62": Phase="Pending", Reason="", readiness=false. Elapsed: 153.11078ms
Dec 23 02:29:03.670: INFO: Pod "client-containers-ade6d0e0-9036-46f3-86a5-3f3a1e169c62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.261113796s
Dec 23 02:29:05.825: INFO: Pod "client-containers-ade6d0e0-9036-46f3-86a5-3f3a1e169c62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.416455387s
STEP: Saw pod success
Dec 23 02:29:05.825: INFO: Pod "client-containers-ade6d0e0-9036-46f3-86a5-3f3a1e169c62" satisfied condition "success or failure"
Dec 23 02:29:05.840: INFO: Trying to get logs from node jerma-worker pod client-containers-ade6d0e0-9036-46f3-86a5-3f3a1e169c62 container test-container: 
STEP: delete the pod
Dec 23 02:29:06.033: INFO: Waiting for pod client-containers-ade6d0e0-9036-46f3-86a5-3f3a1e169c62 to disappear
Dec 23 02:29:06.068: INFO: Pod client-containers-ade6d0e0-9036-46f3-86a5-3f3a1e169c62 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:29:06.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3464" for this suite.

• [SLOW TEST:5.137 seconds]
[k8s.io] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2444,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:29:06.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should add annotations for pods in rc  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Dec 23 02:29:06.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2756'
Dec 23 02:29:06.689: INFO: stderr: ""
Dec 23 02:29:06.689: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Dec 23 02:29:07.695: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 02:29:07.695: INFO: Found 0 / 1
Dec 23 02:29:08.692: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 02:29:08.692: INFO: Found 0 / 1
Dec 23 02:29:09.719: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 02:29:09.719: INFO: Found 0 / 1
Dec 23 02:29:10.707: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 02:29:10.707: INFO: Found 1 / 1
Dec 23 02:29:10.707: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 23 02:29:10.747: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 02:29:10.747: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 23 02:29:10.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-42xkt --namespace=kubectl-2756 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 23 02:29:10.928: INFO: stderr: ""
Dec 23 02:29:10.928: INFO: stdout: "pod/agnhost-master-42xkt patched\n"
STEP: checking annotations
Dec 23 02:29:11.012: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 02:29:11.012: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:29:11.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2756" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":163,"skipped":2450,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:29:11.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Dec 23 02:29:11.153: INFO: Waiting up to 5m0s for pod "downward-api-abeeac03-0a27-4426-95a0-860ad37534a3" in namespace "downward-api-2272" to be "success or failure"
Dec 23 02:29:11.184: INFO: Pod "downward-api-abeeac03-0a27-4426-95a0-860ad37534a3": Phase="Pending", Reason="", readiness=false. Elapsed: 30.809257ms
Dec 23 02:29:13.187: INFO: Pod "downward-api-abeeac03-0a27-4426-95a0-860ad37534a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033916548s
Dec 23 02:29:15.191: INFO: Pod "downward-api-abeeac03-0a27-4426-95a0-860ad37534a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038079813s
STEP: Saw pod success
Dec 23 02:29:15.191: INFO: Pod "downward-api-abeeac03-0a27-4426-95a0-860ad37534a3" satisfied condition "success or failure"
Dec 23 02:29:15.194: INFO: Trying to get logs from node jerma-worker2 pod downward-api-abeeac03-0a27-4426-95a0-860ad37534a3 container dapi-container: 
STEP: delete the pod
Dec 23 02:29:15.237: INFO: Waiting for pod downward-api-abeeac03-0a27-4426-95a0-860ad37534a3 to disappear
Dec 23 02:29:15.250: INFO: Pod downward-api-abeeac03-0a27-4426-95a0-860ad37534a3 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:29:15.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2272" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2463,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:29:15.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 23 02:29:20.384: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:29:20.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6702" for this suite.

• [SLOW TEST:5.223 seconds]
[sig-apps] ReplicaSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":165,"skipped":2483,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:29:20.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Dec 23 02:29:20.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Dec 23 02:29:31.227: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 02:29:34.195: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:29:44.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7447" for this suite.

• [SLOW TEST:24.443 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":166,"skipped":2498,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:29:44.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-e171f1fc-3500-4105-b839-1843649db79a
STEP: Creating a pod to test consume configMaps
Dec 23 02:29:45.045: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-52b62e59-dcde-4d7b-9d56-6cf1813c1c7b" in namespace "projected-2479" to be "success or failure"
Dec 23 02:29:45.049: INFO: Pod "pod-projected-configmaps-52b62e59-dcde-4d7b-9d56-6cf1813c1c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.618712ms
Dec 23 02:29:47.077: INFO: Pod "pod-projected-configmaps-52b62e59-dcde-4d7b-9d56-6cf1813c1c7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03172636s
Dec 23 02:29:49.081: INFO: Pod "pod-projected-configmaps-52b62e59-dcde-4d7b-9d56-6cf1813c1c7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03584888s
STEP: Saw pod success
Dec 23 02:29:49.081: INFO: Pod "pod-projected-configmaps-52b62e59-dcde-4d7b-9d56-6cf1813c1c7b" satisfied condition "success or failure"
Dec 23 02:29:49.085: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-52b62e59-dcde-4d7b-9d56-6cf1813c1c7b container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 02:29:49.109: INFO: Waiting for pod pod-projected-configmaps-52b62e59-dcde-4d7b-9d56-6cf1813c1c7b to disappear
Dec 23 02:29:49.126: INFO: Pod pod-projected-configmaps-52b62e59-dcde-4d7b-9d56-6cf1813c1c7b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:29:49.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2479" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2504,"failed":0}

------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:29:49.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-814, will wait for the garbage collector to delete the pods
Dec 23 02:29:55.245: INFO: Deleting Job.batch foo took: 6.200147ms
Dec 23 02:30:09.446: INFO: Terminating Job.batch foo pods took: 14.200321377s
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:30:54.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-814" for this suite.

• [SLOW TEST:65.123 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":168,"skipped":2504,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:30:54.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Dec 23 02:30:54.535: INFO: >>> kubeConfig: /root/.kube/config
Dec 23 02:30:57.771: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:31:08.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3345" for this suite.

• [SLOW TEST:14.285 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":169,"skipped":2507,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:31:08.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-eeb60d7b-164d-4490-ac05-dcc418b90022 in namespace container-probe-284
Dec 23 02:31:12.645: INFO: Started pod busybox-eeb60d7b-164d-4490-ac05-dcc418b90022 in namespace container-probe-284
STEP: checking the pod's current state and verifying that restartCount is present
Dec 23 02:31:12.647: INFO: Initial restart count of pod busybox-eeb60d7b-164d-4490-ac05-dcc418b90022 is 0
Dec 23 02:32:02.891: INFO: Restart count of pod container-probe-284/busybox-eeb60d7b-164d-4490-ac05-dcc418b90022 is now 1 (50.243711201s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:32:02.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-284" for this suite.

• [SLOW TEST:54.396 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2531,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:32:02.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
Dec 23 02:32:02.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 23 02:32:03.375: INFO: stderr: ""
Dec 23 02:32:03.375: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:32:03.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8695" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":171,"skipped":2535,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:32:03.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create a job from an image, then delete the job [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
Dec 23 02:32:03.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7198 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 23 02:32:07.014: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI1223 02:32:06.914883    1415 log.go:172] (0xc000a18bb0) (0xc00063fc20) Create stream\nI1223 02:32:06.914950    1415 log.go:172] (0xc000a18bb0) (0xc00063fc20) Stream added, broadcasting: 1\nI1223 02:32:06.917855    1415 log.go:172] (0xc000a18bb0) Reply frame received for 1\nI1223 02:32:06.917916    1415 log.go:172] (0xc000a18bb0) (0xc000562000) Create stream\nI1223 02:32:06.917934    1415 log.go:172] (0xc000a18bb0) (0xc000562000) Stream added, broadcasting: 3\nI1223 02:32:06.918773    1415 log.go:172] (0xc000a18bb0) Reply frame received for 3\nI1223 02:32:06.918801    1415 log.go:172] (0xc000a18bb0) (0xc00063fcc0) Create stream\nI1223 02:32:06.918809    1415 log.go:172] (0xc000a18bb0) (0xc00063fcc0) Stream added, broadcasting: 5\nI1223 02:32:06.919661    1415 log.go:172] (0xc000a18bb0) Reply frame received for 5\nI1223 02:32:06.919692    1415 log.go:172] (0xc000a18bb0) (0xc00063fd60) Create stream\nI1223 02:32:06.919702    1415 log.go:172] (0xc000a18bb0) (0xc00063fd60) Stream added, broadcasting: 7\nI1223 02:32:06.920605    1415 log.go:172] (0xc000a18bb0) Reply frame received for 7\nI1223 02:32:06.920723    1415 log.go:172] (0xc000562000) (3) Writing data frame\nI1223 02:32:06.920816    1415 log.go:172] (0xc000562000) (3) Writing data frame\nI1223 02:32:06.921947    1415 log.go:172] (0xc000a18bb0) Data frame received for 5\nI1223 02:32:06.921965    1415 log.go:172] (0xc00063fcc0) (5) Data frame handling\nI1223 02:32:06.921976    1415 log.go:172] (0xc00063fcc0) (5) Data frame sent\nI1223 02:32:06.922535    1415 log.go:172] (0xc000a18bb0) Data frame received for 5\nI1223 02:32:06.922557    1415 log.go:172] (0xc00063fcc0) (5) Data frame handling\nI1223 02:32:06.922573    1415 log.go:172] (0xc00063fcc0) (5) Data frame sent\nI1223 02:32:06.979933    1415 log.go:172] (0xc000a18bb0) Data frame received for 7\nI1223 02:32:06.979957    1415 log.go:172] (0xc00063fd60) (7) Data frame handling\nI1223 02:32:06.979971    1415 log.go:172] (0xc000a18bb0) Data frame received for 5\nI1223 02:32:06.979977    1415 log.go:172] (0xc00063fcc0) (5) Data frame handling\nI1223 02:32:06.980399    1415 log.go:172] (0xc000a18bb0) Data frame received for 1\nI1223 02:32:06.980454    1415 log.go:172] (0xc000a18bb0) (0xc000562000) Stream removed, broadcasting: 3\nI1223 02:32:06.980494    1415 log.go:172] (0xc00063fc20) (1) Data frame handling\nI1223 02:32:06.980525    1415 log.go:172] (0xc00063fc20) (1) Data frame sent\nI1223 02:32:06.980547    1415 log.go:172] (0xc000a18bb0) (0xc00063fc20) Stream removed, broadcasting: 1\nI1223 02:32:06.980567    1415 log.go:172] (0xc000a18bb0) Go away received\nI1223 02:32:06.980978    1415 log.go:172] (0xc000a18bb0) (0xc00063fc20) Stream removed, broadcasting: 1\nI1223 02:32:06.980992    1415 log.go:172] (0xc000a18bb0) (0xc000562000) Stream removed, broadcasting: 3\nI1223 02:32:06.980997    1415 log.go:172] (0xc000a18bb0) (0xc00063fcc0) Stream removed, broadcasting: 5\nI1223 02:32:06.981002    1415 log.go:172] (0xc000a18bb0) (0xc00063fd60) Stream removed, broadcasting: 7\n"
Dec 23 02:32:07.014: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:32:09.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7198" for this suite.

• [SLOW TEST:5.646 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1843
    should create a job from an image, then delete the job [Deprecated] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Deprecated] [Conformance]","total":278,"completed":172,"skipped":2540,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:32:09.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 23 02:32:09.083: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ac2db99-e23f-41f0-8a6c-af9cb13e2210" in namespace "downward-api-8284" to be "success or failure"
Dec 23 02:32:09.087: INFO: Pod "downwardapi-volume-1ac2db99-e23f-41f0-8a6c-af9cb13e2210": Phase="Pending", Reason="", readiness=false. Elapsed: 3.42996ms
Dec 23 02:32:11.091: INFO: Pod "downwardapi-volume-1ac2db99-e23f-41f0-8a6c-af9cb13e2210": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00749679s
Dec 23 02:32:13.095: INFO: Pod "downwardapi-volume-1ac2db99-e23f-41f0-8a6c-af9cb13e2210": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011671304s
STEP: Saw pod success
Dec 23 02:32:13.095: INFO: Pod "downwardapi-volume-1ac2db99-e23f-41f0-8a6c-af9cb13e2210" satisfied condition "success or failure"
Dec 23 02:32:13.098: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1ac2db99-e23f-41f0-8a6c-af9cb13e2210 container client-container: 
STEP: delete the pod
Dec 23 02:32:13.146: INFO: Waiting for pod downwardapi-volume-1ac2db99-e23f-41f0-8a6c-af9cb13e2210 to disappear
Dec 23 02:32:13.159: INFO: Pod downwardapi-volume-1ac2db99-e23f-41f0-8a6c-af9cb13e2210 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:32:13.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8284" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2607,"failed":0}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:32:13.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Dec 23 02:32:13.206: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 23 02:32:13.242: INFO: Waiting for terminating namespaces to be deleted...
Dec 23 02:32:13.245: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Dec 23 02:32:13.250: INFO: kube-proxy-knc9b from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded)
Dec 23 02:32:13.250: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 02:32:13.250: INFO: chaos-daemon-r2kj7 from default started at 2020-11-22 21:56:29 +0000 UTC (1 container statuses recorded)
Dec 23 02:32:13.250: INFO: 	Container chaos-daemon ready: true, restart count 0
Dec 23 02:32:13.250: INFO: kindnet-nlsvd from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded)
Dec 23 02:32:13.250: INFO: 	Container kindnet-cni ready: true, restart count 0
Dec 23 02:32:13.250: INFO: chaos-controller-manager-7f9bbd476f-jm8nf from default started at 2020-11-22 21:56:29 +0000 UTC (1 container statuses recorded)
Dec 23 02:32:13.250: INFO: 	Container chaos-mesh ready: true, restart count 0
Dec 23 02:32:13.250: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Dec 23 02:32:13.268: INFO: chaos-daemon-mzgg5 from default started at 2020-11-22 21:56:28 +0000 UTC (1 container statuses recorded)
Dec 23 02:32:13.268: INFO: 	Container chaos-daemon ready: true, restart count 0
Dec 23 02:32:13.268: INFO: kindnet-5wksn from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded)
Dec 23 02:32:13.268: INFO: 	Container kindnet-cni ready: true, restart count 0
Dec 23 02:32:13.268: INFO: kube-proxy-jgndm from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded)
Dec 23 02:32:13.268: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 02:32:13.268: INFO: e2e-test-rm-busybox-job-9xs4r from kubectl-7198 started at 2020-12-23 02:32:03 +0000 UTC (1 container statuses recorded)
Dec 23 02:32:13.268: INFO: 	Container e2e-test-rm-busybox-job ready: false, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-2df382cf-a09f-49a6-a419-b31c674aadd6 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-2df382cf-a09f-49a6-a419-b31c674aadd6 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-2df382cf-a09f-49a6-a419-b31c674aadd6
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:32:21.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3551" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:8.232 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":174,"skipped":2616,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:32:21.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:32:21.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-709'
Dec 23 02:32:21.812: INFO: stderr: ""
Dec 23 02:32:21.812: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Dec 23 02:32:21.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-709'
Dec 23 02:32:22.190: INFO: stderr: ""
Dec 23 02:32:22.190: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Dec 23 02:32:23.194: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 02:32:23.194: INFO: Found 0 / 1
Dec 23 02:32:24.217: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 02:32:24.217: INFO: Found 0 / 1
Dec 23 02:32:25.194: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 02:32:25.194: INFO: Found 1 / 1
Dec 23 02:32:25.194: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 23 02:32:25.198: INFO: Selector matched 1 pods for map[app:agnhost]
Dec 23 02:32:25.198: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 23 02:32:25.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-56ft7 --namespace=kubectl-709'
Dec 23 02:32:25.440: INFO: stderr: ""
Dec 23 02:32:25.440: INFO: stdout: "Name:         agnhost-master-56ft7\nNamespace:    kubectl-709\nPriority:     0\nNode:         jerma-worker/172.18.0.9\nStart Time:   Wed, 23 Dec 2020 02:32:21 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.2.98\nIPs:\n  IP:           10.244.2.98\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://b96b66c416e4a6f087529e253fb3fb975ff74b95c8d708011d060b6d3d8a9c2c\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 23 Dec 2020 02:32:24 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-m5g4b (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-m5g4b:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-m5g4b\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  3s    default-scheduler      Successfully assigned kubectl-709/agnhost-master-56ft7 to jerma-worker\n  Normal  Pulled     2s    kubelet, jerma-worker  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s    kubelet, jerma-worker  Created container agnhost-master\n  Normal  Started    1s    kubelet, jerma-worker  Started container agnhost-master\n"
Dec 23 02:32:25.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-709'
Dec 23 02:32:25.559: INFO: stderr: ""
Dec 23 02:32:25.559: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-709\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  4s    replication-controller  Created pod: agnhost-master-56ft7\n"
Dec 23 02:32:25.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-709'
Dec 23 02:32:25.658: INFO: stderr: ""
Dec 23 02:32:25.658: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-709\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.109.142.145\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.2.98:6379\nSession Affinity:  None\nEvents:            \n"
Dec 23 02:32:25.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane'
Dec 23 02:32:25.796: INFO: stderr: ""
Dec 23 02:32:25.796: INFO: stdout: "Name:               jerma-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 23 Sep 2020 08:26:58 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-control-plane\n  AcquireTime:     \n  RenewTime:       Wed, 23 Dec 2020 02:32:16 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 23 Dec 2020 02:31:41 +0000   Wed, 23 Sep 2020 08:26:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 23 Dec 2020 02:31:41 +0000   Wed, 23 Sep 2020 08:26:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 23 Dec 2020 02:31:41 +0000   Wed, 23 Sep 2020 08:26:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 23 Dec 2020 02:31:41 +0000   Wed, 23 Sep 2020 08:27:23 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.8\n  Hostname:    jerma-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759868Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759868Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 fe2aca8844154d87b6440058d7a6a967\n  System UUID:                dfffb871-82a7-49e8-b93c-4170ac55bd08\n  Boot ID:                    b267d78b-f69b-4338-80e8-3f4944338e5d\n  Kernel Version:             4.15.0-118-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.17.5\n  Kube-Proxy Version:         v1.17.5\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-6955765f44-7bd2n                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     90d\n  kube-system                 coredns-6955765f44-bxgn5                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     90d\n  kube-system                 etcd-jerma-control-plane               
        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d\n  kube-system                 kindnet-cv4pq                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      90d\n  kube-system                 kube-apiserver-jerma-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         90d\n  kube-system                 kube-controller-manager-jerma-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         90d\n  kube-system                 kube-proxy-vr8mk                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d\n  kube-system                 kube-scheduler-jerma-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         90d\n  local-path-storage          local-path-provisioner-58f6947c7-wgwst         0 (0%)        0 (0%)      0 (0%)           0 (0%)         90d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Dec 23 02:32:25.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-709'
Dec 23 02:32:25.908: INFO: stderr: ""
Dec 23 02:32:25.908: INFO: stdout: "Name:         kubectl-709\nLabels:       e2e-framework=kubectl\n              e2e-run=70cea070-a5b4-4bdd-8919-5a7a8b6a0ca0\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:32:25.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-709" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":175,"skipped":2631,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:32:25.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2344
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Dec 23 02:32:26.055: INFO: Found 0 stateful pods, waiting for 3
Dec 23 02:32:36.060: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:32:36.060: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:32:36.060: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Dec 23 02:32:46.060: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:32:46.060: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:32:46.060: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:32:46.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2344 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 23 02:32:46.350: INFO: stderr: "I1223 02:32:46.195929    1586 log.go:172] (0xc0000f62c0) (0xc0004566e0) Create stream\nI1223 02:32:46.195980    1586 log.go:172] (0xc0000f62c0) (0xc0004566e0) Stream added, broadcasting: 1\nI1223 02:32:46.198184    1586 log.go:172] (0xc0000f62c0) Reply frame received for 1\nI1223 02:32:46.198245    1586 log.go:172] (0xc0000f62c0) (0xc000972000) Create stream\nI1223 02:32:46.198262    1586 log.go:172] (0xc0000f62c0) (0xc000972000) Stream added, broadcasting: 3\nI1223 02:32:46.198994    1586 log.go:172] (0xc0000f62c0) Reply frame received for 3\nI1223 02:32:46.199023    1586 log.go:172] (0xc0000f62c0) (0xc000546280) Create stream\nI1223 02:32:46.199032    1586 log.go:172] (0xc0000f62c0) (0xc000546280) Stream added, broadcasting: 5\nI1223 02:32:46.199617    1586 log.go:172] (0xc0000f62c0) Reply frame received for 5\nI1223 02:32:46.286218    1586 log.go:172] (0xc0000f62c0) Data frame received for 5\nI1223 02:32:46.286257    1586 log.go:172] (0xc000546280) (5) Data frame handling\nI1223 02:32:46.286280    1586 log.go:172] (0xc000546280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1223 02:32:46.338215    1586 log.go:172] (0xc0000f62c0) Data frame received for 3\nI1223 02:32:46.338237    1586 log.go:172] (0xc000972000) (3) Data frame handling\nI1223 02:32:46.338244    1586 log.go:172] (0xc000972000) (3) Data frame sent\nI1223 02:32:46.338566    1586 log.go:172] (0xc0000f62c0) Data frame received for 3\nI1223 02:32:46.338623    1586 log.go:172] (0xc000972000) (3) Data frame handling\nI1223 02:32:46.338670    1586 log.go:172] (0xc0000f62c0) Data frame received for 5\nI1223 02:32:46.338705    1586 log.go:172] (0xc000546280) (5) Data frame handling\nI1223 02:32:46.340289    1586 log.go:172] (0xc0000f62c0) Data frame received for 1\nI1223 02:32:46.340418    1586 log.go:172] (0xc0004566e0) (1) Data frame handling\nI1223 02:32:46.340555    1586 log.go:172] (0xc0004566e0) (1) Data frame sent\nI1223 02:32:46.340602    1586 log.go:172] (0xc0000f62c0) (0xc0004566e0) Stream removed, broadcasting: 1\nI1223 02:32:46.340733    1586 log.go:172] (0xc0000f62c0) Go away received\nI1223 02:32:46.341231    1586 log.go:172] (0xc0000f62c0) (0xc0004566e0) Stream removed, broadcasting: 1\nI1223 02:32:46.341257    1586 log.go:172] (0xc0000f62c0) (0xc000972000) Stream removed, broadcasting: 3\nI1223 02:32:46.341269    1586 log.go:172] (0xc0000f62c0) (0xc000546280) Stream removed, broadcasting: 5\n"
Dec 23 02:32:46.350: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 23 02:32:46.350: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Dec 23 02:32:56.379: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 23 02:33:06.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2344 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 23 02:33:06.640: INFO: stderr: "I1223 02:33:06.516444    1607 log.go:172] (0xc0009286e0) (0xc0006dda40) Create stream\nI1223 02:33:06.516492    1607 log.go:172] (0xc0009286e0) (0xc0006dda40) Stream added, broadcasting: 1\nI1223 02:33:06.518848    1607 log.go:172] (0xc0009286e0) Reply frame received for 1\nI1223 02:33:06.518893    1607 log.go:172] (0xc0009286e0) (0xc000844000) Create stream\nI1223 02:33:06.518908    1607 log.go:172] (0xc0009286e0) (0xc000844000) Stream added, broadcasting: 3\nI1223 02:33:06.519825    1607 log.go:172] (0xc0009286e0) Reply frame received for 3\nI1223 02:33:06.519877    1607 log.go:172] (0xc0009286e0) (0xc000844140) Create stream\nI1223 02:33:06.519895    1607 log.go:172] (0xc0009286e0) (0xc000844140) Stream added, broadcasting: 5\nI1223 02:33:06.520751    1607 log.go:172] (0xc0009286e0) Reply frame received for 5\nI1223 02:33:06.630128    1607 log.go:172] (0xc0009286e0) Data frame received for 5\nI1223 02:33:06.630160    1607 log.go:172] (0xc000844140) (5) Data frame handling\nI1223 02:33:06.630180    1607 log.go:172] (0xc000844140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1223 02:33:06.630598    1607 log.go:172] (0xc0009286e0) Data frame received for 5\nI1223 02:33:06.630629    1607 log.go:172] (0xc000844140) (5) Data frame handling\nI1223 02:33:06.630877    1607 log.go:172] (0xc0009286e0) Data frame received for 3\nI1223 02:33:06.630899    1607 log.go:172] (0xc000844000) (3) Data frame handling\nI1223 02:33:06.630915    1607 log.go:172] (0xc000844000) (3) Data frame sent\nI1223 02:33:06.630930    1607 log.go:172] (0xc0009286e0) Data frame received for 3\nI1223 02:33:06.630938    1607 log.go:172] (0xc000844000) (3) Data frame handling\nI1223 02:33:06.632634    1607 log.go:172] (0xc0009286e0) Data frame received for 1\nI1223 02:33:06.632654    1607 log.go:172] (0xc0006dda40) (1) Data frame handling\nI1223 02:33:06.632668    1607 log.go:172] (0xc0006dda40) (1) Data frame sent\nI1223 02:33:06.632679    1607 log.go:172] (0xc0009286e0) (0xc0006dda40) Stream removed, broadcasting: 1\nI1223 02:33:06.632699    1607 log.go:172] (0xc0009286e0) Go away received\nI1223 02:33:06.633125    1607 log.go:172] (0xc0009286e0) (0xc0006dda40) Stream removed, broadcasting: 1\nI1223 02:33:06.633143    1607 log.go:172] (0xc0009286e0) (0xc000844000) Stream removed, broadcasting: 3\nI1223 02:33:06.633152    1607 log.go:172] (0xc0009286e0) (0xc000844140) Stream removed, broadcasting: 5\n"
Dec 23 02:33:06.640: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 23 02:33:06.640: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 23 02:33:26.698: INFO: Waiting for StatefulSet statefulset-2344/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 23 02:33:36.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2344 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 23 02:33:36.986: INFO: stderr: "I1223 02:33:36.837372    1628 log.go:172] (0xc000bb4000) (0xc000a78000) Create stream\nI1223 02:33:36.837443    1628 log.go:172] (0xc000bb4000) (0xc000a78000) Stream added, broadcasting: 1\nI1223 02:33:36.839669    1628 log.go:172] (0xc000bb4000) Reply frame received for 1\nI1223 02:33:36.839789    1628 log.go:172] (0xc000bb4000) (0xc000a780a0) Create stream\nI1223 02:33:36.839844    1628 log.go:172] (0xc000bb4000) (0xc000a780a0) Stream added, broadcasting: 3\nI1223 02:33:36.840805    1628 log.go:172] (0xc000bb4000) Reply frame received for 3\nI1223 02:33:36.840827    1628 log.go:172] (0xc000bb4000) (0xc000a781e0) Create stream\nI1223 02:33:36.840903    1628 log.go:172] (0xc000bb4000) (0xc000a781e0) Stream added, broadcasting: 5\nI1223 02:33:36.841891    1628 log.go:172] (0xc000bb4000) Reply frame received for 5\nI1223 02:33:36.943042    1628 log.go:172] (0xc000bb4000) Data frame received for 5\nI1223 02:33:36.943079    1628 log.go:172] (0xc000a781e0) (5) Data frame handling\nI1223 02:33:36.943104    1628 log.go:172] (0xc000a781e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1223 02:33:36.973923    1628 log.go:172] (0xc000bb4000) Data frame received for 3\nI1223 02:33:36.973959    1628 log.go:172] (0xc000a780a0) (3) Data frame handling\nI1223 02:33:36.973979    1628 log.go:172] (0xc000a780a0) (3) Data frame sent\nI1223 02:33:36.973996    1628 log.go:172] (0xc000bb4000) Data frame received for 3\nI1223 02:33:36.974013    1628 log.go:172] (0xc000a780a0) (3) Data frame handling\nI1223 02:33:36.974213    1628 log.go:172] (0xc000bb4000) Data frame received for 5\nI1223 02:33:36.974248    1628 log.go:172] (0xc000a781e0) (5) Data frame handling\nI1223 02:33:36.976471    1628 log.go:172] (0xc000bb4000) Data frame received for 1\nI1223 02:33:36.976572    1628 log.go:172] (0xc000a78000) (1) Data frame handling\nI1223 02:33:36.976611    1628 log.go:172] (0xc000a78000) (1) Data frame sent\nI1223 02:33:36.976634    1628 log.go:172] (0xc000bb4000) (0xc000a78000) Stream removed, broadcasting: 1\nI1223 02:33:36.976678    1628 log.go:172] (0xc000bb4000) Go away received\nI1223 02:33:36.977276    1628 log.go:172] (0xc000bb4000) (0xc000a78000) Stream removed, broadcasting: 1\nI1223 02:33:36.977306    1628 log.go:172] (0xc000bb4000) (0xc000a780a0) Stream removed, broadcasting: 3\nI1223 02:33:36.977320    1628 log.go:172] (0xc000bb4000) (0xc000a781e0) Stream removed, broadcasting: 5\n"
Dec 23 02:33:36.986: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 23 02:33:36.986: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 23 02:33:47.017: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 23 02:33:57.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2344 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 23 02:33:57.286: INFO: stderr: "I1223 02:33:57.192253    1649 log.go:172] (0xc000bd6000) (0xc0006b5ae0) Create stream\nI1223 02:33:57.192303    1649 log.go:172] (0xc000bd6000) (0xc0006b5ae0) Stream added, broadcasting: 1\nI1223 02:33:57.195122    1649 log.go:172] (0xc000bd6000) Reply frame received for 1\nI1223 02:33:57.195167    1649 log.go:172] (0xc000bd6000) (0xc000972000) Create stream\nI1223 02:33:57.195182    1649 log.go:172] (0xc000bd6000) (0xc000972000) Stream added, broadcasting: 3\nI1223 02:33:57.196130    1649 log.go:172] (0xc000bd6000) Reply frame received for 3\nI1223 02:33:57.196179    1649 log.go:172] (0xc000bd6000) (0xc000236000) Create stream\nI1223 02:33:57.196195    1649 log.go:172] (0xc000bd6000) (0xc000236000) Stream added, broadcasting: 5\nI1223 02:33:57.197273    1649 log.go:172] (0xc000bd6000) Reply frame received for 5\nI1223 02:33:57.279448    1649 log.go:172] (0xc000bd6000) Data frame received for 3\nI1223 02:33:57.279488    1649 log.go:172] (0xc000972000) (3) Data frame handling\nI1223 02:33:57.279511    1649 log.go:172] (0xc000972000) (3) Data frame sent\nI1223 02:33:57.279535    1649 log.go:172] (0xc000bd6000) Data frame received for 3\nI1223 02:33:57.279553    1649 log.go:172] (0xc000972000) (3) Data frame handling\nI1223 02:33:57.279804    1649 log.go:172] (0xc000bd6000) Data frame received for 5\nI1223 02:33:57.279828    1649 log.go:172] (0xc000236000) (5) Data frame handling\nI1223 02:33:57.279843    1649 log.go:172] (0xc000236000) (5) Data frame sent\nI1223 02:33:57.279858    1649 log.go:172] (0xc000bd6000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1223 02:33:57.279869    1649 log.go:172] (0xc000236000) (5) Data frame handling\nI1223 02:33:57.281374    1649 log.go:172] (0xc000bd6000) Data frame received for 1\nI1223 02:33:57.281393    1649 log.go:172] (0xc0006b5ae0) (1) Data frame handling\nI1223 02:33:57.281403    1649 log.go:172] (0xc0006b5ae0) (1) Data frame sent\nI1223 02:33:57.281425    1649 log.go:172] (0xc000bd6000) (0xc0006b5ae0) Stream removed, broadcasting: 1\nI1223 02:33:57.281452    1649 log.go:172] (0xc000bd6000) Go away received\nI1223 02:33:57.281866    1649 log.go:172] (0xc000bd6000) (0xc0006b5ae0) Stream removed, broadcasting: 1\nI1223 02:33:57.281890    1649 log.go:172] (0xc000bd6000) (0xc000972000) Stream removed, broadcasting: 3\nI1223 02:33:57.281901    1649 log.go:172] (0xc000bd6000) (0xc000236000) Stream removed, broadcasting: 5\n"
Dec 23 02:33:57.286: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 23 02:33:57.286: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 23 02:34:27.324: INFO: Waiting for StatefulSet statefulset-2344/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Dec 23 02:34:37.332: INFO: Deleting all statefulset in ns statefulset-2344
Dec 23 02:34:37.335: INFO: Scaling statefulset ss2 to 0
Dec 23 02:35:07.353: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 02:35:07.355: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:35:07.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2344" for this suite.

• [SLOW TEST:161.459 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":176,"skipped":2652,"failed":0}
SSSSSSSSSSSSSSSSSS
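
The sequence above (update the template image, let pods roll in reverse ordinal order, then roll back) can be driven with plain client-go. Below is a minimal sketch, not the e2e framework's own helper, assuming a recent client-go with context-taking signatures; namespace, StatefulSet name, and image tags are taken from the run above. The suite rolls back through controller revisions, while this sketch simply restores the previous tag, which has the same effect on the pod template.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// setImage updates the pod template image; the StatefulSet controller then
// replaces pods in reverse ordinal order, which is the rolling behavior
// verified by the test above.
func setImage(cs kubernetes.Interface, ns, name, image string) error {
	// RetryOnConflict re-reads and re-applies the change if a concurrent
	// writer bumped the object's resourceVersion between GET and UPDATE.
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ss, err := cs.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		ss.Spec.Template.Spec.Containers[0].Image = image
		_, err = cs.AppsV1().StatefulSets(ns).Update(context.TODO(), ss, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Forward update, then roll back by restoring the previous tag.
	if err := setImage(cs, "statefulset-2344", "ss2", "docker.io/library/httpd:2.4.39-alpine"); err != nil {
		panic(err)
	}
	if err := setImage(cs, "statefulset-2344", "ss2", "docker.io/library/httpd:2.4.38-alpine"); err != nil {
		panic(err)
	}
	fmt.Println("update and rollback submitted")
}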
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:35:07.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 23 02:35:07.435: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1892 /api/v1/namespaces/watch-1892/configmaps/e2e-watch-test-watch-closed 30499509-164a-4c0a-b505-f38e6ea7dfd6 23941136 0 2020-12-23 02:35:07 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 02:35:07.435: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1892 /api/v1/namespaces/watch-1892/configmaps/e2e-watch-test-watch-closed 30499509-164a-4c0a-b505-f38e6ea7dfd6 23941137 0 2020-12-23 02:35:07 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 23 02:35:07.488: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1892 /api/v1/namespaces/watch-1892/configmaps/e2e-watch-test-watch-closed 30499509-164a-4c0a-b505-f38e6ea7dfd6 23941138 0 2020-12-23 02:35:07 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 02:35:07.489: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1892 /api/v1/namespaces/watch-1892/configmaps/e2e-watch-test-watch-closed 30499509-164a-4c0a-b505-f38e6ea7dfd6 23941139 0 2020-12-23 02:35:07 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:35:07.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1892" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":177,"skipped":2670,"failed":0}
SSSSS
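
The pattern this test verifies: remember the resourceVersion of the last event a watch delivered, and open the next watch from it, so the changes made while the watch was closed are still observed. A minimal client-go sketch, with client construction as in the earlier block; the label selector matches the configmap created above.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// resumeWatch opens a watch starting at rv, the resourceVersion of the last
// event seen by the previous (now closed) watch. The API server replays all
// changes after rv, so nothing that happened in the gap is lost.
func resumeWatch(cs kubernetes.Interface, ns, rv string) {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{
		LabelSelector:   "watch-this-configmap=watch-closed-and-restarted",
		ResourceVersion: rv,
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm := ev.Object.(*corev1.ConfigMap)
		// Carry cm.ResourceVersion forward to resume again after this watch closes.
		fmt.Println(ev.Type, cm.Name, "rv:", cm.ResourceVersion)
	}
}

In the run above, resumeWatch(cs, "watch-1892", "23941137") would replay exactly the mutation-2 MODIFIED event (rv 23941138) and the DELETED event (rv 23941139) that occurred while the first watch was closed.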
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:35:07.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 23 02:35:07.542: INFO: Waiting up to 5m0s for pod "pod-c48515c5-c2a1-4879-82f9-309004d6b4c2" in namespace "emptydir-4703" to be "success or failure"
Dec 23 02:35:07.546: INFO: Pod "pod-c48515c5-c2a1-4879-82f9-309004d6b4c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.263968ms
Dec 23 02:35:09.550: INFO: Pod "pod-c48515c5-c2a1-4879-82f9-309004d6b4c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007158355s
Dec 23 02:35:11.555: INFO: Pod "pod-c48515c5-c2a1-4879-82f9-309004d6b4c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01221965s
STEP: Saw pod success
Dec 23 02:35:11.555: INFO: Pod "pod-c48515c5-c2a1-4879-82f9-309004d6b4c2" satisfied condition "success or failure"
Dec 23 02:35:11.557: INFO: Trying to get logs from node jerma-worker pod pod-c48515c5-c2a1-4879-82f9-309004d6b4c2 container test-container: 
STEP: delete the pod
Dec 23 02:35:11.646: INFO: Waiting for pod pod-c48515c5-c2a1-4879-82f9-309004d6b4c2 to disappear
Dec 23 02:35:11.704: INFO: Pod pod-c48515c5-c2a1-4879-82f9-309004d6b4c2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:35:11.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4703" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2675,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:35:11.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 23 02:35:11.824: INFO: Waiting up to 5m0s for pod "pod-4c161ab5-febb-41cc-bd36-0f99ed6fde88" in namespace "emptydir-5654" to be "success or failure"
Dec 23 02:35:11.827: INFO: Pod "pod-4c161ab5-febb-41cc-bd36-0f99ed6fde88": Phase="Pending", Reason="", readiness=false. Elapsed: 3.267103ms
Dec 23 02:35:13.852: INFO: Pod "pod-4c161ab5-febb-41cc-bd36-0f99ed6fde88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027695241s
Dec 23 02:35:15.856: INFO: Pod "pod-4c161ab5-febb-41cc-bd36-0f99ed6fde88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031601677s
STEP: Saw pod success
Dec 23 02:35:15.856: INFO: Pod "pod-4c161ab5-febb-41cc-bd36-0f99ed6fde88" satisfied condition "success or failure"
Dec 23 02:35:15.858: INFO: Trying to get logs from node jerma-worker pod pod-4c161ab5-febb-41cc-bd36-0f99ed6fde88 container test-container: 
STEP: delete the pod
Dec 23 02:35:15.892: INFO: Waiting for pod pod-4c161ab5-febb-41cc-bd36-0f99ed6fde88 to disappear
Dec 23 02:35:15.906: INFO: Pod pod-4c161ab5-febb-41cc-bd36-0f99ed6fde88 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:35:15.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5654" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2675,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:35:15.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 23 02:35:16.058: INFO: Waiting up to 5m0s for pod "pod-e4537ac1-2cd0-4e34-8758-2ccedc28a450" in namespace "emptydir-8919" to be "success or failure"
Dec 23 02:35:16.062: INFO: Pod "pod-e4537ac1-2cd0-4e34-8758-2ccedc28a450": Phase="Pending", Reason="", readiness=false. Elapsed: 3.465067ms
Dec 23 02:35:18.112: INFO: Pod "pod-e4537ac1-2cd0-4e34-8758-2ccedc28a450": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05351788s
Dec 23 02:35:20.115: INFO: Pod "pod-e4537ac1-2cd0-4e34-8758-2ccedc28a450": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057274688s
STEP: Saw pod success
Dec 23 02:35:20.116: INFO: Pod "pod-e4537ac1-2cd0-4e34-8758-2ccedc28a450" satisfied condition "success or failure"
Dec 23 02:35:20.119: INFO: Trying to get logs from node jerma-worker pod pod-e4537ac1-2cd0-4e34-8758-2ccedc28a450 container test-container: 
STEP: delete the pod
Dec 23 02:35:20.154: INFO: Waiting for pod pod-e4537ac1-2cd0-4e34-8758-2ccedc28a450 to disappear
Dec 23 02:35:20.157: INFO: Pod pod-e4537ac1-2cd0-4e34-8758-2ccedc28a450 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:35:20.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8919" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2684,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
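
The three emptyDir checks above vary only the writing user, the file mode bits, and whether the volume's own mode is inspected; structurally they create the same pod. A sketch of that shape follows; the image and command are illustrative assumptions, not the suite's mounttest invocation.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod builds a pod like the ones created above: a test container
// mounts an emptyDir volume, writes a file with the given mode bits, and
// prints the observed permissions for the test to verify.
func emptyDirPod(mode string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "emptydir-demo-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cache",
				// Leaving Medium unset selects node-local disk, which is the
				// "default medium" in the test names above.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /cache/f && chmod " + mode + " /cache/f && ls -l /cache/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
		},
	}
}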
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:35:20.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-3542
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 23 02:35:20.207: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 23 02:35:44.283: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.251:8080/dial?request=hostname&protocol=udp&host=10.244.2.106&port=8081&tries=1'] Namespace:pod-network-test-3542 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:35:44.283: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:35:44.303542       6 log.go:172] (0xc00248da20) (0xc001801f40) Create stream
I1223 02:35:44.303563       6 log.go:172] (0xc00248da20) (0xc001801f40) Stream added, broadcasting: 1
I1223 02:35:44.305021       6 log.go:172] (0xc00248da20) Reply frame received for 1
I1223 02:35:44.305055       6 log.go:172] (0xc00248da20) (0xc0018b94a0) Create stream
I1223 02:35:44.305068       6 log.go:172] (0xc00248da20) (0xc0018b94a0) Stream added, broadcasting: 3
I1223 02:35:44.305846       6 log.go:172] (0xc00248da20) Reply frame received for 3
I1223 02:35:44.305877       6 log.go:172] (0xc00248da20) (0xc0018b9720) Create stream
I1223 02:35:44.305890       6 log.go:172] (0xc00248da20) (0xc0018b9720) Stream added, broadcasting: 5
I1223 02:35:44.306906       6 log.go:172] (0xc00248da20) Reply frame received for 5
I1223 02:35:44.382024       6 log.go:172] (0xc00248da20) Data frame received for 3
I1223 02:35:44.382050       6 log.go:172] (0xc0018b94a0) (3) Data frame handling
I1223 02:35:44.382069       6 log.go:172] (0xc0018b94a0) (3) Data frame sent
I1223 02:35:44.382553       6 log.go:172] (0xc00248da20) Data frame received for 5
I1223 02:35:44.382574       6 log.go:172] (0xc0018b9720) (5) Data frame handling
I1223 02:35:44.382606       6 log.go:172] (0xc00248da20) Data frame received for 3
I1223 02:35:44.382643       6 log.go:172] (0xc0018b94a0) (3) Data frame handling
I1223 02:35:44.384224       6 log.go:172] (0xc00248da20) Data frame received for 1
I1223 02:35:44.384243       6 log.go:172] (0xc001801f40) (1) Data frame handling
I1223 02:35:44.384263       6 log.go:172] (0xc001801f40) (1) Data frame sent
I1223 02:35:44.384278       6 log.go:172] (0xc00248da20) (0xc001801f40) Stream removed, broadcasting: 1
I1223 02:35:44.384291       6 log.go:172] (0xc00248da20) Go away received
I1223 02:35:44.384421       6 log.go:172] (0xc00248da20) (0xc001801f40) Stream removed, broadcasting: 1
I1223 02:35:44.384442       6 log.go:172] (0xc00248da20) (0xc0018b94a0) Stream removed, broadcasting: 3
I1223 02:35:44.384449       6 log.go:172] (0xc00248da20) (0xc0018b9720) Stream removed, broadcasting: 5
Dec 23 02:35:44.384: INFO: Waiting for responses: map[]
Dec 23 02:35:44.387: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.251:8080/dial?request=hostname&protocol=udp&host=10.244.1.250&port=8081&tries=1'] Namespace:pod-network-test-3542 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:35:44.387: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:35:44.420632       6 log.go:172] (0xc002520bb0) (0xc002a00820) Create stream
I1223 02:35:44.420654       6 log.go:172] (0xc002520bb0) (0xc002a00820) Stream added, broadcasting: 1
I1223 02:35:44.422697       6 log.go:172] (0xc002520bb0) Reply frame received for 1
I1223 02:35:44.422773       6 log.go:172] (0xc002520bb0) (0xc002b76000) Create stream
I1223 02:35:44.422800       6 log.go:172] (0xc002520bb0) (0xc002b76000) Stream added, broadcasting: 3
I1223 02:35:44.423986       6 log.go:172] (0xc002520bb0) Reply frame received for 3
I1223 02:35:44.424057       6 log.go:172] (0xc002520bb0) (0xc002a00dc0) Create stream
I1223 02:35:44.424101       6 log.go:172] (0xc002520bb0) (0xc002a00dc0) Stream added, broadcasting: 5
I1223 02:35:44.425855       6 log.go:172] (0xc002520bb0) Reply frame received for 5
I1223 02:35:44.505893       6 log.go:172] (0xc002520bb0) Data frame received for 3
I1223 02:35:44.505945       6 log.go:172] (0xc002b76000) (3) Data frame handling
I1223 02:35:44.505972       6 log.go:172] (0xc002b76000) (3) Data frame sent
I1223 02:35:44.506708       6 log.go:172] (0xc002520bb0) Data frame received for 3
I1223 02:35:44.506736       6 log.go:172] (0xc002b76000) (3) Data frame handling
I1223 02:35:44.506773       6 log.go:172] (0xc002520bb0) Data frame received for 5
I1223 02:35:44.506787       6 log.go:172] (0xc002a00dc0) (5) Data frame handling
I1223 02:35:44.508261       6 log.go:172] (0xc002520bb0) Data frame received for 1
I1223 02:35:44.508282       6 log.go:172] (0xc002a00820) (1) Data frame handling
I1223 02:35:44.508294       6 log.go:172] (0xc002a00820) (1) Data frame sent
I1223 02:35:44.508325       6 log.go:172] (0xc002520bb0) (0xc002a00820) Stream removed, broadcasting: 1
I1223 02:35:44.508402       6 log.go:172] (0xc002520bb0) Go away received
I1223 02:35:44.508425       6 log.go:172] (0xc002520bb0) (0xc002a00820) Stream removed, broadcasting: 1
I1223 02:35:44.508443       6 log.go:172] (0xc002520bb0) (0xc002b76000) Stream removed, broadcasting: 3
I1223 02:35:44.508466       6 log.go:172] (0xc002520bb0) (0xc002a00dc0) Stream removed, broadcasting: 5
Dec 23 02:35:44.508: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:35:44.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3542" for this suite.

• [SLOW TEST:24.352 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2712,"failed":0}
SSSSSSSS
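
The exec'd curl above uses agnhost's dial endpoint: the host test pod asks one pod's HTTP server on port 8080 to relay a "hostname" request over UDP to the target pod on port 8081 and report what answered. A standalone sketch of the same request follows; the reply body is printed raw here because its JSON shape is an assumption, while the log only shows the expected-responses map draining to empty.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// dial asks the proxy pod (serving HTTP on 8080) to send a UDP "hostname"
// probe to the target pod, mirroring the curl the test execs above.
func dial(proxyIP, targetIP string, targetPort int) (string, error) {
	u := fmt.Sprintf("http://%s:8080/dial?%s", proxyIP, url.Values{
		"request":  {"hostname"},
		"protocol": {"udp"},
		"host":     {targetIP},
		"port":     {fmt.Sprint(targetPort)},
		"tries":    {"1"},
	}.Encode())
	resp, err := http.Get(u)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}

func main() {
	// IPs taken from the run above: the proxy at 10.244.1.251 probes the
	// target pod 10.244.2.106 on UDP 8081.
	out, err := dial("10.244.1.251", "10.244.2.106", 8081)
	fmt.Println(out, err)
}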
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:35:44.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:36:44.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3844" for this suite.

• [SLOW TEST:60.075 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2720,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
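
What this test asserts: a failing readiness probe keeps a pod out of service endpoints but, unlike a liveness probe, never restarts the container, so after the 60-second observation window the pod is still Running with Ready=false and restartCount 0. A sketch of such a pod follows; the image, command, and probe timings are illustrative assumptions, and the Handler field shown matches the v1.17-era API (it is named ProbeHandler in newer k8s.io/api releases).

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// neverReadyPod runs a long-lived container whose readiness probe always
// fails: the kubelet marks it Ready=false forever but never restarts it.
func neverReadyPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
}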
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:36:44.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9596
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9596
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9596
Dec 23 02:36:44.706: INFO: Found 0 stateful pods, waiting for 1
Dec 23 02:36:54.710: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 23 02:36:54.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9596 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 23 02:36:57.746: INFO: stderr: "I1223 02:36:57.589812    1669 log.go:172] (0xc000e1a000) (0xc0004c6960) Create stream\nI1223 02:36:57.589847    1669 log.go:172] (0xc000e1a000) (0xc0004c6960) Stream added, broadcasting: 1\nI1223 02:36:57.592257    1669 log.go:172] (0xc000e1a000) Reply frame received for 1\nI1223 02:36:57.592291    1669 log.go:172] (0xc000e1a000) (0xc0004c6a00) Create stream\nI1223 02:36:57.592302    1669 log.go:172] (0xc000e1a000) (0xc0004c6a00) Stream added, broadcasting: 3\nI1223 02:36:57.593197    1669 log.go:172] (0xc000e1a000) Reply frame received for 3\nI1223 02:36:57.593230    1669 log.go:172] (0xc000e1a000) (0xc000639b80) Create stream\nI1223 02:36:57.593238    1669 log.go:172] (0xc000e1a000) (0xc000639b80) Stream added, broadcasting: 5\nI1223 02:36:57.593923    1669 log.go:172] (0xc000e1a000) Reply frame received for 5\nI1223 02:36:57.678596    1669 log.go:172] (0xc000e1a000) Data frame received for 5\nI1223 02:36:57.678617    1669 log.go:172] (0xc000639b80) (5) Data frame handling\nI1223 02:36:57.678629    1669 log.go:172] (0xc000639b80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1223 02:36:57.733142    1669 log.go:172] (0xc000e1a000) Data frame received for 3\nI1223 02:36:57.733205    1669 log.go:172] (0xc0004c6a00) (3) Data frame handling\nI1223 02:36:57.733225    1669 log.go:172] (0xc0004c6a00) (3) Data frame sent\nI1223 02:36:57.733260    1669 log.go:172] (0xc000e1a000) Data frame received for 5\nI1223 02:36:57.733279    1669 log.go:172] (0xc000639b80) (5) Data frame handling\nI1223 02:36:57.733424    1669 log.go:172] (0xc000e1a000) Data frame received for 3\nI1223 02:36:57.733466    1669 log.go:172] (0xc0004c6a00) (3) Data frame handling\nI1223 02:36:57.735630    1669 log.go:172] (0xc000e1a000) Data frame received for 1\nI1223 02:36:57.735649    1669 log.go:172] (0xc0004c6960) (1) Data frame handling\nI1223 02:36:57.735657    1669 log.go:172] (0xc0004c6960) (1) Data frame sent\nI1223 02:36:57.735668    1669 log.go:172] (0xc000e1a000) (0xc0004c6960) Stream removed, broadcasting: 1\nI1223 02:36:57.735683    1669 log.go:172] (0xc000e1a000) Go away received\nI1223 02:36:57.738081    1669 log.go:172] (0xc000e1a000) (0xc0004c6960) Stream removed, broadcasting: 1\nI1223 02:36:57.738107    1669 log.go:172] (0xc000e1a000) (0xc0004c6a00) Stream removed, broadcasting: 3\nI1223 02:36:57.738121    1669 log.go:172] (0xc000e1a000) (0xc000639b80) Stream removed, broadcasting: 5\n"
Dec 23 02:36:57.746: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 23 02:36:57.746: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 23 02:36:57.749: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 23 02:37:07.754: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 02:37:07.754: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 02:37:07.770: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999742s
Dec 23 02:37:08.775: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993074835s
Dec 23 02:37:09.779: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988427558s
Dec 23 02:37:10.783: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.984382508s
Dec 23 02:37:11.787: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979911014s
Dec 23 02:37:12.791: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.976206154s
Dec 23 02:37:13.795: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.971665496s
Dec 23 02:37:14.799: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.968059703s
Dec 23 02:37:15.804: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.963795214s
Dec 23 02:37:16.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 959.070128ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9596
Dec 23 02:37:17.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9596 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 23 02:37:18.069: INFO: stderr: "I1223 02:37:17.965900    1702 log.go:172] (0xc000a7c000) (0xc0006de780) Create stream\nI1223 02:37:17.965968    1702 log.go:172] (0xc000a7c000) (0xc0006de780) Stream added, broadcasting: 1\nI1223 02:37:17.968596    1702 log.go:172] (0xc000a7c000) Reply frame received for 1\nI1223 02:37:17.968674    1702 log.go:172] (0xc000a7c000) (0xc000727ea0) Create stream\nI1223 02:37:17.968701    1702 log.go:172] (0xc000a7c000) (0xc000727ea0) Stream added, broadcasting: 3\nI1223 02:37:17.969802    1702 log.go:172] (0xc000a7c000) Reply frame received for 3\nI1223 02:37:17.969849    1702 log.go:172] (0xc000a7c000) (0xc000727f40) Create stream\nI1223 02:37:17.969871    1702 log.go:172] (0xc000a7c000) (0xc000727f40) Stream added, broadcasting: 5\nI1223 02:37:17.970627    1702 log.go:172] (0xc000a7c000) Reply frame received for 5\nI1223 02:37:18.058970    1702 log.go:172] (0xc000a7c000) Data frame received for 5\nI1223 02:37:18.059029    1702 log.go:172] (0xc000727f40) (5) Data frame handling\nI1223 02:37:18.059041    1702 log.go:172] (0xc000727f40) (5) Data frame sent\nI1223 02:37:18.059052    1702 log.go:172] (0xc000a7c000) Data frame received for 5\nI1223 02:37:18.059060    1702 log.go:172] (0xc000727f40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1223 02:37:18.059103    1702 log.go:172] (0xc000a7c000) Data frame received for 3\nI1223 02:37:18.059147    1702 log.go:172] (0xc000727ea0) (3) Data frame handling\nI1223 02:37:18.059167    1702 log.go:172] (0xc000727ea0) (3) Data frame sent\nI1223 02:37:18.059189    1702 log.go:172] (0xc000a7c000) Data frame received for 3\nI1223 02:37:18.059202    1702 log.go:172] (0xc000727ea0) (3) Data frame handling\nI1223 02:37:18.060514    1702 log.go:172] (0xc000a7c000) Data frame received for 1\nI1223 02:37:18.060533    1702 log.go:172] (0xc0006de780) (1) Data frame handling\nI1223 02:37:18.060554    1702 log.go:172] (0xc0006de780) (1) Data frame sent\nI1223 02:37:18.060572    1702 log.go:172] (0xc000a7c000) (0xc0006de780) Stream removed, broadcasting: 1\nI1223 02:37:18.060586    1702 log.go:172] (0xc000a7c000) Go away received\nI1223 02:37:18.061018    1702 log.go:172] (0xc000a7c000) (0xc0006de780) Stream removed, broadcasting: 1\nI1223 02:37:18.061056    1702 log.go:172] (0xc000a7c000) (0xc000727ea0) Stream removed, broadcasting: 3\nI1223 02:37:18.061069    1702 log.go:172] (0xc000a7c000) (0xc000727f40) Stream removed, broadcasting: 5\n"
Dec 23 02:37:18.069: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 23 02:37:18.069: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 23 02:37:18.072: INFO: Found 1 stateful pods, waiting for 3
Dec 23 02:37:28.077: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:37:28.077: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:37:28.077: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 23 02:37:28.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9596 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 23 02:37:28.343: INFO: stderr: "I1223 02:37:28.227282    1724 log.go:172] (0xc0009a8790) (0xc0009d6000) Create stream\nI1223 02:37:28.227325    1724 log.go:172] (0xc0009a8790) (0xc0009d6000) Stream added, broadcasting: 1\nI1223 02:37:28.229972    1724 log.go:172] (0xc0009a8790) Reply frame received for 1\nI1223 02:37:28.230017    1724 log.go:172] (0xc0009a8790) (0xc0009d60a0) Create stream\nI1223 02:37:28.230031    1724 log.go:172] (0xc0009a8790) (0xc0009d60a0) Stream added, broadcasting: 3\nI1223 02:37:28.231215    1724 log.go:172] (0xc0009a8790) Reply frame received for 3\nI1223 02:37:28.231346    1724 log.go:172] (0xc0009a8790) (0xc0009ec000) Create stream\nI1223 02:37:28.231362    1724 log.go:172] (0xc0009a8790) (0xc0009ec000) Stream added, broadcasting: 5\nI1223 02:37:28.232409    1724 log.go:172] (0xc0009a8790) Reply frame received for 5\nI1223 02:37:28.330633    1724 log.go:172] (0xc0009a8790) Data frame received for 5\nI1223 02:37:28.330683    1724 log.go:172] (0xc0009ec000) (5) Data frame handling\nI1223 02:37:28.330699    1724 log.go:172] (0xc0009ec000) (5) Data frame sent\nI1223 02:37:28.330711    1724 log.go:172] (0xc0009a8790) Data frame received for 5\nI1223 02:37:28.330720    1724 log.go:172] (0xc0009ec000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1223 02:37:28.330735    1724 log.go:172] (0xc0009a8790) Data frame received for 3\nI1223 02:37:28.330827    1724 log.go:172] (0xc0009d60a0) (3) Data frame handling\nI1223 02:37:28.330865    1724 log.go:172] (0xc0009d60a0) (3) Data frame sent\nI1223 02:37:28.330920    1724 log.go:172] (0xc0009a8790) Data frame received for 3\nI1223 02:37:28.330943    1724 log.go:172] (0xc0009d60a0) (3) Data frame handling\nI1223 02:37:28.332707    1724 log.go:172] (0xc0009a8790) Data frame received for 1\nI1223 02:37:28.332743    1724 log.go:172] (0xc0009d6000) (1) Data frame handling\nI1223 02:37:28.332758    1724 log.go:172] (0xc0009d6000) (1) Data frame sent\nI1223 02:37:28.332771    1724 log.go:172] (0xc0009a8790) (0xc0009d6000) Stream removed, broadcasting: 1\nI1223 02:37:28.332786    1724 log.go:172] (0xc0009a8790) Go away received\nI1223 02:37:28.333310    1724 log.go:172] (0xc0009a8790) (0xc0009d6000) Stream removed, broadcasting: 1\nI1223 02:37:28.333331    1724 log.go:172] (0xc0009a8790) (0xc0009d60a0) Stream removed, broadcasting: 3\nI1223 02:37:28.333341    1724 log.go:172] (0xc0009a8790) (0xc0009ec000) Stream removed, broadcasting: 5\n"
Dec 23 02:37:28.343: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 23 02:37:28.343: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 23 02:37:28.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9596 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 23 02:37:28.589: INFO: stderr: "I1223 02:37:28.476463    1746 log.go:172] (0xc0001f40b0) (0xc0009a20a0) Create stream\nI1223 02:37:28.476527    1746 log.go:172] (0xc0001f40b0) (0xc0009a20a0) Stream added, broadcasting: 1\nI1223 02:37:28.479414    1746 log.go:172] (0xc0001f40b0) Reply frame received for 1\nI1223 02:37:28.479468    1746 log.go:172] (0xc0001f40b0) (0xc0008f6000) Create stream\nI1223 02:37:28.479481    1746 log.go:172] (0xc0001f40b0) (0xc0008f6000) Stream added, broadcasting: 3\nI1223 02:37:28.480824    1746 log.go:172] (0xc0001f40b0) Reply frame received for 3\nI1223 02:37:28.480967    1746 log.go:172] (0xc0001f40b0) (0xc0009a21e0) Create stream\nI1223 02:37:28.480992    1746 log.go:172] (0xc0001f40b0) (0xc0009a21e0) Stream added, broadcasting: 5\nI1223 02:37:28.482098    1746 log.go:172] (0xc0001f40b0) Reply frame received for 5\nI1223 02:37:28.547126    1746 log.go:172] (0xc0001f40b0) Data frame received for 5\nI1223 02:37:28.547159    1746 log.go:172] (0xc0009a21e0) (5) Data frame handling\nI1223 02:37:28.547185    1746 log.go:172] (0xc0009a21e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1223 02:37:28.578892    1746 log.go:172] (0xc0001f40b0) Data frame received for 3\nI1223 02:37:28.578925    1746 log.go:172] (0xc0008f6000) (3) Data frame handling\nI1223 02:37:28.578938    1746 log.go:172] (0xc0008f6000) (3) Data frame sent\nI1223 02:37:28.578946    1746 log.go:172] (0xc0001f40b0) Data frame received for 3\nI1223 02:37:28.578952    1746 log.go:172] (0xc0008f6000) (3) Data frame handling\nI1223 02:37:28.578983    1746 log.go:172] (0xc0001f40b0) Data frame received for 5\nI1223 02:37:28.578996    1746 log.go:172] (0xc0009a21e0) (5) Data frame handling\nI1223 02:37:28.581186    1746 log.go:172] (0xc0001f40b0) Data frame received for 1\nI1223 02:37:28.581209    1746 log.go:172] (0xc0009a20a0) (1) Data frame handling\nI1223 02:37:28.581223    1746 log.go:172] (0xc0009a20a0) (1) Data frame sent\nI1223 02:37:28.581237    1746 log.go:172] (0xc0001f40b0) (0xc0009a20a0) Stream removed, broadcasting: 1\nI1223 02:37:28.581406    1746 log.go:172] (0xc0001f40b0) Go away received\nI1223 02:37:28.581508    1746 log.go:172] (0xc0001f40b0) (0xc0009a20a0) Stream removed, broadcasting: 1\nI1223 02:37:28.581524    1746 log.go:172] (0xc0001f40b0) (0xc0008f6000) Stream removed, broadcasting: 3\nI1223 02:37:28.581535    1746 log.go:172] (0xc0001f40b0) (0xc0009a21e0) Stream removed, broadcasting: 5\n"
Dec 23 02:37:28.589: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 23 02:37:28.589: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 23 02:37:28.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9596 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 23 02:37:28.818: INFO: stderr: "I1223 02:37:28.724385    1768 log.go:172] (0xc0000f8d10) (0xc000932000) Create stream\nI1223 02:37:28.724455    1768 log.go:172] (0xc0000f8d10) (0xc000932000) Stream added, broadcasting: 1\nI1223 02:37:28.727744    1768 log.go:172] (0xc0000f8d10) Reply frame received for 1\nI1223 02:37:28.727792    1768 log.go:172] (0xc0000f8d10) (0xc0006659a0) Create stream\nI1223 02:37:28.727808    1768 log.go:172] (0xc0000f8d10) (0xc0006659a0) Stream added, broadcasting: 3\nI1223 02:37:28.729332    1768 log.go:172] (0xc0000f8d10) Reply frame received for 3\nI1223 02:37:28.729370    1768 log.go:172] (0xc0000f8d10) (0xc000665c20) Create stream\nI1223 02:37:28.729378    1768 log.go:172] (0xc0000f8d10) (0xc000665c20) Stream added, broadcasting: 5\nI1223 02:37:28.730362    1768 log.go:172] (0xc0000f8d10) Reply frame received for 5\nI1223 02:37:28.775968    1768 log.go:172] (0xc0000f8d10) Data frame received for 5\nI1223 02:37:28.775999    1768 log.go:172] (0xc000665c20) (5) Data frame handling\nI1223 02:37:28.776021    1768 log.go:172] (0xc000665c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1223 02:37:28.806813    1768 log.go:172] (0xc0000f8d10) Data frame received for 5\nI1223 02:37:28.806841    1768 log.go:172] (0xc000665c20) (5) Data frame handling\nI1223 02:37:28.806857    1768 log.go:172] (0xc0000f8d10) Data frame received for 3\nI1223 02:37:28.806863    1768 log.go:172] (0xc0006659a0) (3) Data frame handling\nI1223 02:37:28.806873    1768 log.go:172] (0xc0006659a0) (3) Data frame sent\nI1223 02:37:28.806879    1768 log.go:172] (0xc0000f8d10) Data frame received for 3\nI1223 02:37:28.806883    1768 log.go:172] (0xc0006659a0) (3) Data frame handling\nI1223 02:37:28.808769    1768 log.go:172] (0xc0000f8d10) Data frame received for 1\nI1223 02:37:28.808820    1768 log.go:172] (0xc000932000) (1) Data frame handling\nI1223 02:37:28.809002    1768 log.go:172] (0xc000932000) (1) Data frame sent\nI1223 02:37:28.809034    1768 log.go:172] (0xc0000f8d10) (0xc000932000) Stream removed, broadcasting: 1\nI1223 02:37:28.809071    1768 log.go:172] (0xc0000f8d10) Go away received\nI1223 02:37:28.809474    1768 log.go:172] (0xc0000f8d10) (0xc000932000) Stream removed, broadcasting: 1\nI1223 02:37:28.809493    1768 log.go:172] (0xc0000f8d10) (0xc0006659a0) Stream removed, broadcasting: 3\nI1223 02:37:28.809502    1768 log.go:172] (0xc0000f8d10) (0xc000665c20) Stream removed, broadcasting: 5\n"
Dec 23 02:37:28.818: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 23 02:37:28.818: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 23 02:37:28.818: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 02:37:28.821: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 23 02:37:38.828: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 02:37:38.828: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 02:37:38.828: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 02:37:38.844: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.9999995s
Dec 23 02:37:39.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990135521s
Dec 23 02:37:40.856: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986085966s
Dec 23 02:37:41.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.978141557s
Dec 23 02:37:42.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.974345444s
Dec 23 02:37:43.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969168461s
Dec 23 02:37:44.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.934993393s
Dec 23 02:37:45.909: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.930861528s
Dec 23 02:37:46.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.925304606s
Dec 23 02:37:47.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 921.168668ms
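
The ten-second countdowns above probe the invariant under test: while any stateful pod is Running but not Ready, the controller neither creates the next ordinal on scale-up nor deletes the highest ordinal on scale-down, so status.replicas holds steady. A minimal sketch of observing and driving that from outside, reusing the client setup from the first block; polling the status fields this way is an illustrative pattern, not the suite's own helper.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleAndReport bumps .spec.replicas and prints the status fields the test
// polls; while any pod is unhealthy, Replicas and ReadyReplicas will not
// advance toward the new target.
func scaleAndReport(cs kubernetes.Interface, ns, name string, replicas int32) error {
	ss, err := cs.AppsV1().StatefulSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("spec=%d status=%d ready=%d\n",
		*ss.Spec.Replicas, ss.Status.Replicas, ss.Status.ReadyReplicas)
	ss.Spec.Replicas = &replicas
	_, err = cs.AppsV1().StatefulSets(ns).Update(context.TODO(), ss, metav1.UpdateOptions{})
	return err
}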
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-9596
Dec 23 02:37:48.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9596 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 23 02:37:49.172: INFO: stderr: "I1223 02:37:49.058787    1789 log.go:172] (0xc0008926e0) (0xc000a80000) Create stream\nI1223 02:37:49.058848    1789 log.go:172] (0xc0008926e0) (0xc000a80000) Stream added, broadcasting: 1\nI1223 02:37:49.061605    1789 log.go:172] (0xc0008926e0) Reply frame received for 1\nI1223 02:37:49.061661    1789 log.go:172] (0xc0008926e0) (0xc000639b80) Create stream\nI1223 02:37:49.061678    1789 log.go:172] (0xc0008926e0) (0xc000639b80) Stream added, broadcasting: 3\nI1223 02:37:49.062740    1789 log.go:172] (0xc0008926e0) Reply frame received for 3\nI1223 02:37:49.062815    1789 log.go:172] (0xc0008926e0) (0xc000024000) Create stream\nI1223 02:37:49.062848    1789 log.go:172] (0xc0008926e0) (0xc000024000) Stream added, broadcasting: 5\nI1223 02:37:49.063817    1789 log.go:172] (0xc0008926e0) Reply frame received for 5\nI1223 02:37:49.158628    1789 log.go:172] (0xc0008926e0) Data frame received for 5\nI1223 02:37:49.158653    1789 log.go:172] (0xc000024000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1223 02:37:49.158682    1789 log.go:172] (0xc0008926e0) Data frame received for 3\nI1223 02:37:49.158730    1789 log.go:172] (0xc000639b80) (3) Data frame handling\nI1223 02:37:49.158748    1789 log.go:172] (0xc000639b80) (3) Data frame sent\nI1223 02:37:49.158761    1789 log.go:172] (0xc0008926e0) Data frame received for 3\nI1223 02:37:49.158789    1789 log.go:172] (0xc000639b80) (3) Data frame handling\nI1223 02:37:49.158817    1789 log.go:172] (0xc000024000) (5) Data frame sent\nI1223 02:37:49.158842    1789 log.go:172] (0xc0008926e0) Data frame received for 5\nI1223 02:37:49.158853    1789 log.go:172] (0xc000024000) (5) Data frame handling\nI1223 02:37:49.159952    1789 log.go:172] (0xc0008926e0) Data frame received for 1\nI1223 02:37:49.159968    1789 log.go:172] (0xc000a80000) (1) Data frame handling\nI1223 02:37:49.159983    1789 log.go:172] (0xc000a80000) (1) Data frame sent\nI1223 02:37:49.160047    1789 log.go:172] (0xc0008926e0) (0xc000a80000) Stream removed, broadcasting: 1\nI1223 02:37:49.160065    1789 log.go:172] (0xc0008926e0) Go away received\nI1223 02:37:49.160488    1789 log.go:172] (0xc0008926e0) (0xc000a80000) Stream removed, broadcasting: 1\nI1223 02:37:49.160507    1789 log.go:172] (0xc0008926e0) (0xc000639b80) Stream removed, broadcasting: 3\nI1223 02:37:49.160517    1789 log.go:172] (0xc0008926e0) (0xc000024000) Stream removed, broadcasting: 5\n"
Dec 23 02:37:49.172: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 23 02:37:49.172: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 23 02:37:49.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9596 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 23 02:37:49.367: INFO: stderr: "I1223 02:37:49.292333    1811 log.go:172] (0xc000114fd0) (0xc000aa01e0) Create stream\nI1223 02:37:49.292386    1811 log.go:172] (0xc000114fd0) (0xc000aa01e0) Stream added, broadcasting: 1\nI1223 02:37:49.295017    1811 log.go:172] (0xc000114fd0) Reply frame received for 1\nI1223 02:37:49.295055    1811 log.go:172] (0xc000114fd0) (0xc000a220a0) Create stream\nI1223 02:37:49.295066    1811 log.go:172] (0xc000114fd0) (0xc000a220a0) Stream added, broadcasting: 3\nI1223 02:37:49.295889    1811 log.go:172] (0xc000114fd0) Reply frame received for 3\nI1223 02:37:49.295910    1811 log.go:172] (0xc000114fd0) (0xc000aa0280) Create stream\nI1223 02:37:49.295915    1811 log.go:172] (0xc000114fd0) (0xc000aa0280) Stream added, broadcasting: 5\nI1223 02:37:49.296547    1811 log.go:172] (0xc000114fd0) Reply frame received for 5\nI1223 02:37:49.357939    1811 log.go:172] (0xc000114fd0) Data frame received for 3\nI1223 02:37:49.357965    1811 log.go:172] (0xc000a220a0) (3) Data frame handling\nI1223 02:37:49.357989    1811 log.go:172] (0xc000a220a0) (3) Data frame sent\nI1223 02:37:49.358000    1811 log.go:172] (0xc000114fd0) Data frame received for 3\nI1223 02:37:49.358016    1811 log.go:172] (0xc000a220a0) (3) Data frame handling\nI1223 02:37:49.358343    1811 log.go:172] (0xc000114fd0) Data frame received for 5\nI1223 02:37:49.358365    1811 log.go:172] (0xc000aa0280) (5) Data frame handling\nI1223 02:37:49.358374    1811 log.go:172] (0xc000aa0280) (5) Data frame sent\nI1223 02:37:49.358379    1811 log.go:172] (0xc000114fd0) Data frame received for 5\nI1223 02:37:49.358384    1811 log.go:172] (0xc000aa0280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1223 02:37:49.360111    1811 log.go:172] (0xc000114fd0) Data frame received for 1\nI1223 02:37:49.360143    1811 log.go:172] (0xc000aa01e0) (1) Data frame handling\nI1223 02:37:49.360163    1811 log.go:172] (0xc000aa01e0) (1) Data frame sent\nI1223 02:37:49.360189    1811 log.go:172] (0xc000114fd0) (0xc000aa01e0) Stream removed, broadcasting: 1\nI1223 02:37:49.360326    1811 log.go:172] (0xc000114fd0) Go away received\nI1223 02:37:49.361374    1811 log.go:172] (0xc000114fd0) (0xc000aa01e0) Stream removed, broadcasting: 1\nI1223 02:37:49.361413    1811 log.go:172] (0xc000114fd0) (0xc000a220a0) Stream removed, broadcasting: 3\nI1223 02:37:49.361435    1811 log.go:172] (0xc000114fd0) (0xc000aa0280) Stream removed, broadcasting: 5\n"
Dec 23 02:37:49.367: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 23 02:37:49.367: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 23 02:37:49.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9596 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 23 02:37:49.606: INFO: stderr: "I1223 02:37:49.511965    1831 log.go:172] (0xc000982000) (0xc000968000) Create stream\nI1223 02:37:49.512016    1831 log.go:172] (0xc000982000) (0xc000968000) Stream added, broadcasting: 1\nI1223 02:37:49.514663    1831 log.go:172] (0xc000982000) Reply frame received for 1\nI1223 02:37:49.514708    1831 log.go:172] (0xc000982000) (0xc000a26000) Create stream\nI1223 02:37:49.514727    1831 log.go:172] (0xc000982000) (0xc000a26000) Stream added, broadcasting: 3\nI1223 02:37:49.515596    1831 log.go:172] (0xc000982000) Reply frame received for 3\nI1223 02:37:49.515626    1831 log.go:172] (0xc000982000) (0xc0009680a0) Create stream\nI1223 02:37:49.515633    1831 log.go:172] (0xc000982000) (0xc0009680a0) Stream added, broadcasting: 5\nI1223 02:37:49.516368    1831 log.go:172] (0xc000982000) Reply frame received for 5\nI1223 02:37:49.601072    1831 log.go:172] (0xc000982000) Data frame received for 5\nI1223 02:37:49.601120    1831 log.go:172] (0xc0009680a0) (5) Data frame handling\nI1223 02:37:49.601135    1831 log.go:172] (0xc0009680a0) (5) Data frame sent\nI1223 02:37:49.601147    1831 log.go:172] (0xc000982000) Data frame received for 5\nI1223 02:37:49.601161    1831 log.go:172] (0xc0009680a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1223 02:37:49.601198    1831 log.go:172] (0xc000982000) Data frame received for 3\nI1223 02:37:49.601222    1831 log.go:172] (0xc000a26000) (3) Data frame handling\nI1223 02:37:49.601249    1831 log.go:172] (0xc000a26000) (3) Data frame sent\nI1223 02:37:49.601261    1831 log.go:172] (0xc000982000) Data frame received for 3\nI1223 02:37:49.601270    1831 log.go:172] (0xc000a26000) (3) Data frame handling\nI1223 02:37:49.602422    1831 log.go:172] (0xc000982000) Data frame received for 1\nI1223 02:37:49.602441    1831 log.go:172] (0xc000968000) (1) Data frame handling\nI1223 02:37:49.602452    1831 log.go:172] (0xc000968000) (1) Data frame sent\nI1223 02:37:49.602477    1831 log.go:172] (0xc000982000) (0xc000968000) Stream removed, broadcasting: 1\nI1223 02:37:49.602506    1831 log.go:172] (0xc000982000) Go away received\nI1223 02:37:49.602846    1831 log.go:172] (0xc000982000) (0xc000968000) Stream removed, broadcasting: 1\nI1223 02:37:49.602866    1831 log.go:172] (0xc000982000) (0xc000a26000) Stream removed, broadcasting: 3\nI1223 02:37:49.602874    1831 log.go:172] (0xc000982000) (0xc0009680a0) Stream removed, broadcasting: 5\n"
Dec 23 02:37:49.606: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 23 02:37:49.606: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 23 02:37:49.606: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Dec 23 02:38:09.621: INFO: Deleting all statefulset in ns statefulset-9596
Dec 23 02:38:09.624: INFO: Scaling statefulset ss to 0
Dec 23 02:38:09.632: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 02:38:09.634: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:38:09.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9596" for this suite.

• [SLOW TEST:85.061 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":183,"skipped":2859,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:38:09.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
[It] should create a deployment from an image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 23 02:38:09.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-9442'
Dec 23 02:38:09.808: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 02:38:09.808: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634
Dec 23 02:38:11.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9442'
Dec 23 02:38:11.984: INFO: stderr: ""
Dec 23 02:38:11.984: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:38:11.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9442" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Deprecated] [Conformance]","total":278,"completed":184,"skipped":2872,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:38:12.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:38:45.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4526" for this suite.

• [SLOW TEST:33.611 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2877,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:38:45.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-x9bj
STEP: Creating a pod to test atomic-volume-subpath
Dec 23 02:38:45.857: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x9bj" in namespace "subpath-6063" to be "success or failure"
Dec 23 02:38:45.874: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.250798ms
Dec 23 02:38:47.892: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034205104s
Dec 23 02:38:49.896: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Running", Reason="", readiness=true. Elapsed: 4.038884426s
Dec 23 02:38:51.901: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Running", Reason="", readiness=true. Elapsed: 6.043396299s
Dec 23 02:38:53.905: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Running", Reason="", readiness=true. Elapsed: 8.047711298s
Dec 23 02:38:55.909: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Running", Reason="", readiness=true. Elapsed: 10.051855718s
Dec 23 02:38:57.914: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Running", Reason="", readiness=true. Elapsed: 12.056060506s
Dec 23 02:38:59.917: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Running", Reason="", readiness=true. Elapsed: 14.059936492s
Dec 23 02:39:01.921: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Running", Reason="", readiness=true. Elapsed: 16.063922958s
Dec 23 02:39:03.925: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Running", Reason="", readiness=true. Elapsed: 18.067018809s
Dec 23 02:39:05.935: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Running", Reason="", readiness=true. Elapsed: 20.0777083s
Dec 23 02:39:07.940: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Running", Reason="", readiness=true. Elapsed: 22.082578223s
Dec 23 02:39:09.945: INFO: Pod "pod-subpath-test-configmap-x9bj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.087792516s
STEP: Saw pod success
Dec 23 02:39:09.945: INFO: Pod "pod-subpath-test-configmap-x9bj" satisfied condition "success or failure"
Dec 23 02:39:09.949: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-x9bj container test-container-subpath-configmap-x9bj: 
STEP: delete the pod
Dec 23 02:39:09.995: INFO: Waiting for pod pod-subpath-test-configmap-x9bj to disappear
Dec 23 02:39:10.001: INFO: Pod pod-subpath-test-configmap-x9bj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-x9bj
Dec 23 02:39:10.001: INFO: Deleting pod "pod-subpath-test-configmap-x9bj" in namespace "subpath-6063"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:39:10.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6063" for this suite.

• [SLOW TEST:24.271 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":186,"skipped":2898,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:39:10.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:39:10.069: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:39:11.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2660" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":187,"skipped":2912,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:39:11.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Dec 23 02:39:12.056: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Dec 23 02:39:14.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287952, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287952, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287952, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287952, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 02:39:17.146: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:39:17.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:39:18.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5405" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:7.270 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":188,"skipped":2955,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:39:18.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 23 02:39:22.670: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:39:22.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9402" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":2971,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:39:22.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 23 02:39:23.206: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 23 02:39:25.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287963, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287963, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287963, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744287963, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 02:39:28.347: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:39:38.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9425" for this suite.
STEP: Destroying namespace "webhook-9425-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.008 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":190,"skipped":3024,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:39:38.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1796
[It] should update a single-container pod's image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 23 02:39:38.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8427'
Dec 23 02:39:38.905: INFO: stderr: ""
Dec 23 02:39:38.905: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Dec 23 02:39:43.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8427 -o json'
Dec 23 02:39:44.060: INFO: stderr: ""
Dec 23 02:39:44.060: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-12-23T02:39:38Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-8427\",\n        \"resourceVersion\": \"23942689\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-8427/pods/e2e-test-httpd-pod\",\n        \"uid\": \"f8cacbbc-b658-4bd4-89a2-5d9a648c3865\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-4j8j5\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-4j8j5\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-4j8j5\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-12-23T02:39:38Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-12-23T02:39:41Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-12-23T02:39:41Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-12-23T02:39:38Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://97bf93be94dd55b3a84e695e1885ee225e2f64e05f7cb9709dbefe5bd3fbdb8d\",\n               
 \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-12-23T02:39:41Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.10\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.3\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.3\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-12-23T02:39:38Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 23 02:39:44.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8427'
Dec 23 02:39:44.445: INFO: stderr: ""
Dec 23 02:39:44.445: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1801
Dec 23 02:39:44.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8427'
Dec 23 02:39:47.845: INFO: stderr: ""
Dec 23 02:39:47.845: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:39:47.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8427" for this suite.

• [SLOW TEST:9.153 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1792
    should update a single-container pod's image  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":191,"skipped":3048,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:39:47.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:39:48.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Dec 23 02:39:50.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2983 create -f -'
Dec 23 02:39:54.207: INFO: stderr: ""
Dec 23 02:39:54.207: INFO: stdout: "e2e-test-crd-publish-openapi-542-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Dec 23 02:39:54.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2983 delete e2e-test-crd-publish-openapi-542-crds test-cr'
Dec 23 02:39:54.337: INFO: stderr: ""
Dec 23 02:39:54.337: INFO: stdout: "e2e-test-crd-publish-openapi-542-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Dec 23 02:39:54.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2983 apply -f -'
Dec 23 02:39:54.590: INFO: stderr: ""
Dec 23 02:39:54.590: INFO: stdout: "e2e-test-crd-publish-openapi-542-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Dec 23 02:39:54.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2983 delete e2e-test-crd-publish-openapi-542-crds test-cr'
Dec 23 02:39:54.713: INFO: stderr: ""
Dec 23 02:39:54.713: INFO: stdout: "e2e-test-crd-publish-openapi-542-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Dec 23 02:39:54.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-542-crds'
Dec 23 02:39:54.955: INFO: stderr: ""
Dec 23 02:39:54.955: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-542-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:39:57.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2983" for this suite.

• [SLOW TEST:10.034 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":192,"skipped":3056,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:39:57.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:325
[It] should scale a replication controller  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Dec 23 02:39:58.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3388'
Dec 23 02:39:58.366: INFO: stderr: ""
Dec 23 02:39:58.366: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 02:39:58.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3388'
Dec 23 02:39:58.498: INFO: stderr: ""
Dec 23 02:39:58.498: INFO: stdout: "update-demo-nautilus-4sr98 update-demo-nautilus-66dxk "
Dec 23 02:39:58.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sr98 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:39:58.617: INFO: stderr: ""
Dec 23 02:39:58.617: INFO: stdout: ""
Dec 23 02:39:58.617: INFO: update-demo-nautilus-4sr98 is created but not running
Dec 23 02:40:03.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3388'
Dec 23 02:40:03.723: INFO: stderr: ""
Dec 23 02:40:03.723: INFO: stdout: "update-demo-nautilus-4sr98 update-demo-nautilus-66dxk "
Dec 23 02:40:03.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sr98 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:03.805: INFO: stderr: ""
Dec 23 02:40:03.805: INFO: stdout: "true"
Dec 23 02:40:03.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sr98 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:03.901: INFO: stderr: ""
Dec 23 02:40:03.901: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 02:40:03.901: INFO: validating pod update-demo-nautilus-4sr98
Dec 23 02:40:03.905: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 02:40:03.905: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 23 02:40:03.905: INFO: update-demo-nautilus-4sr98 is verified up and running
Dec 23 02:40:03.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-66dxk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:03.991: INFO: stderr: ""
Dec 23 02:40:03.991: INFO: stdout: "true"
Dec 23 02:40:03.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-66dxk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:04.086: INFO: stderr: ""
Dec 23 02:40:04.086: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 02:40:04.086: INFO: validating pod update-demo-nautilus-66dxk
Dec 23 02:40:04.090: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 02:40:04.090: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 23 02:40:04.090: INFO: update-demo-nautilus-66dxk is verified up and running
STEP: scaling down the replication controller
Dec 23 02:40:04.092: INFO: scanned /root for discovery docs: 
Dec 23 02:40:04.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3388'
Dec 23 02:40:05.217: INFO: stderr: ""
Dec 23 02:40:05.217: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 02:40:05.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3388'
Dec 23 02:40:05.317: INFO: stderr: ""
Dec 23 02:40:05.317: INFO: stdout: "update-demo-nautilus-4sr98 update-demo-nautilus-66dxk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 23 02:40:10.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3388'
Dec 23 02:40:10.413: INFO: stderr: ""
Dec 23 02:40:10.413: INFO: stdout: "update-demo-nautilus-4sr98 update-demo-nautilus-66dxk "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 23 02:40:15.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3388'
Dec 23 02:40:15.526: INFO: stderr: ""
Dec 23 02:40:15.526: INFO: stdout: "update-demo-nautilus-66dxk "
Dec 23 02:40:15.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-66dxk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:15.641: INFO: stderr: ""
Dec 23 02:40:15.641: INFO: stdout: "true"
Dec 23 02:40:15.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-66dxk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:15.740: INFO: stderr: ""
Dec 23 02:40:15.740: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 02:40:15.740: INFO: validating pod update-demo-nautilus-66dxk
Dec 23 02:40:15.743: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 02:40:15.743: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 23 02:40:15.743: INFO: update-demo-nautilus-66dxk is verified up and running
STEP: scaling up the replication controller
Dec 23 02:40:15.745: INFO: scanned /root for discovery docs: 
Dec 23 02:40:15.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3388'
Dec 23 02:40:16.860: INFO: stderr: ""
Dec 23 02:40:16.860: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 23 02:40:16.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3388'
Dec 23 02:40:16.960: INFO: stderr: ""
Dec 23 02:40:16.960: INFO: stdout: "update-demo-nautilus-2vvq2 update-demo-nautilus-66dxk "
Dec 23 02:40:16.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vvq2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:17.050: INFO: stderr: ""
Dec 23 02:40:17.050: INFO: stdout: ""
Dec 23 02:40:17.050: INFO: update-demo-nautilus-2vvq2 is created but not running
Dec 23 02:40:22.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3388'
Dec 23 02:40:22.149: INFO: stderr: ""
Dec 23 02:40:22.149: INFO: stdout: "update-demo-nautilus-2vvq2 update-demo-nautilus-66dxk "
Dec 23 02:40:22.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vvq2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:22.245: INFO: stderr: ""
Dec 23 02:40:22.245: INFO: stdout: "true"
Dec 23 02:40:22.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2vvq2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:22.336: INFO: stderr: ""
Dec 23 02:40:22.336: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 02:40:22.336: INFO: validating pod update-demo-nautilus-2vvq2
Dec 23 02:40:22.339: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 02:40:22.340: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 23 02:40:22.340: INFO: update-demo-nautilus-2vvq2 is verified up and running
Dec 23 02:40:22.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-66dxk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:22.437: INFO: stderr: ""
Dec 23 02:40:22.437: INFO: stdout: "true"
Dec 23 02:40:22.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-66dxk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3388'
Dec 23 02:40:22.531: INFO: stderr: ""
Dec 23 02:40:22.531: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 23 02:40:22.531: INFO: validating pod update-demo-nautilus-66dxk
Dec 23 02:40:22.534: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 23 02:40:22.534: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 23 02:40:22.534: INFO: update-demo-nautilus-66dxk is verified up and running
STEP: using delete to clean up resources
Dec 23 02:40:22.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3388'
Dec 23 02:40:22.646: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 02:40:22.646: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 23 02:40:22.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3388'
Dec 23 02:40:22.746: INFO: stderr: "No resources found in kubectl-3388 namespace.\n"
Dec 23 02:40:22.746: INFO: stdout: ""
Dec 23 02:40:22.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3388 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 23 02:40:22.851: INFO: stderr: ""
Dec 23 02:40:22.851: INFO: stdout: "update-demo-nautilus-2vvq2\nupdate-demo-nautilus-66dxk\n"
Dec 23 02:40:23.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3388'
Dec 23 02:40:23.459: INFO: stderr: "No resources found in kubectl-3388 namespace.\n"
Dec 23 02:40:23.459: INFO: stdout: ""
Dec 23 02:40:23.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3388 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 23 02:40:23.537: INFO: stderr: ""
Dec 23 02:40:23.537: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:40:23.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3388" for this suite.

• [SLOW TEST:25.656 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:323
    should scale a replication controller  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":193,"skipped":3070,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:40:23.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6248 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6248;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6248 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6248;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6248.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6248.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6248.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6248.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6248.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6248.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6248.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6248.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6248.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6248.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6248.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 214.178.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.178.214_udp@PTR;check="$$(dig +tcp +noall +answer +search 214.178.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.178.214_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6248 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6248;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6248 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6248;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6248.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6248.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6248.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6248.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6248.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6248.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6248.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6248.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6248.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6248.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6248.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6248.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 214.178.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.178.214_udp@PTR;check="$$(dig +tcp +noall +answer +search 214.178.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.178.214_tcp@PTR;sleep 1; done

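The +search flag in the probe commands above is what makes the partially qualified names (dns-test-service, dns-test-service.dns-6248, and so on) resolvable: the resolver appends suffixes from the pod's /etc/resolv.conf search list until a lookup succeeds. A quick way to see that list from the probe pod created below, with the expected contents sketched as comments (the nameserver address is cluster-specific and assumed here):

  kubectl -n dns-6248 exec dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a -- cat /etc/resolv.conf
  #   search dns-6248.svc.cluster.local svc.cluster.local cluster.local
  #   nameserver 10.96.0.10   <- assumed; the cluster DNS Service IP
  #   options ndots:5
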
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 02:40:30.027: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.030: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.034: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.036: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.038: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.041: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.043: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.046: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.063: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.066: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.069: INFO: Unable to read jessie_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.071: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.074: INFO: Unable to read jessie_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.077: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.080: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.083: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:30.098: INFO: Lookups using dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6248 wheezy_tcp@dns-test-service.dns-6248 wheezy_udp@dns-test-service.dns-6248.svc wheezy_tcp@dns-test-service.dns-6248.svc wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6248 jessie_tcp@dns-test-service.dns-6248 jessie_udp@dns-test-service.dns-6248.svc jessie_tcp@dns-test-service.dns-6248.svc jessie_udp@_http._tcp.dns-test-service.dns-6248.svc jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc]

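These early failures are expected. Each dig loop above writes an OK marker file per name, and the framework polls the probe pod roughly every five seconds until every expected file exists; "the server could not find the requested resource" typically means a given result file has not been written yet. To read one result by hand, a hedged sketch (file path taken from the commands above, which the probe pod serves under /results):

  kubectl -n dns-6248 exec dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a \
    -- cat /results/wheezy_udp@dns-test-service
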
Dec 23 02:40:35.103: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.107: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.113: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.116: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.118: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.120: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.123: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.142: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.145: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.148: INFO: Unable to read jessie_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.151: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.154: INFO: Unable to read jessie_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.156: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.159: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.163: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:35.180: INFO: Lookups using dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6248 wheezy_tcp@dns-test-service.dns-6248 wheezy_udp@dns-test-service.dns-6248.svc wheezy_tcp@dns-test-service.dns-6248.svc wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6248 jessie_tcp@dns-test-service.dns-6248 jessie_udp@dns-test-service.dns-6248.svc jessie_tcp@dns-test-service.dns-6248.svc jessie_udp@_http._tcp.dns-test-service.dns-6248.svc jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc]

Dec 23 02:40:40.103: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.107: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.113: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.116: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.119: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.122: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.125: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.142: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.145: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.148: INFO: Unable to read jessie_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.150: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.153: INFO: Unable to read jessie_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.155: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.158: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.161: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:40.178: INFO: Lookups using dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6248 wheezy_tcp@dns-test-service.dns-6248 wheezy_udp@dns-test-service.dns-6248.svc wheezy_tcp@dns-test-service.dns-6248.svc wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6248 jessie_tcp@dns-test-service.dns-6248 jessie_udp@dns-test-service.dns-6248.svc jessie_tcp@dns-test-service.dns-6248.svc jessie_udp@_http._tcp.dns-test-service.dns-6248.svc jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc]

Dec 23 02:40:45.103: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.105: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.108: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.111: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.114: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.116: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.119: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.122: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.139: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.142: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.145: INFO: Unable to read jessie_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.147: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.149: INFO: Unable to read jessie_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.152: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.154: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.157: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:45.173: INFO: Lookups using dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6248 wheezy_tcp@dns-test-service.dns-6248 wheezy_udp@dns-test-service.dns-6248.svc wheezy_tcp@dns-test-service.dns-6248.svc wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6248 jessie_tcp@dns-test-service.dns-6248 jessie_udp@dns-test-service.dns-6248.svc jessie_tcp@dns-test-service.dns-6248.svc jessie_udp@_http._tcp.dns-test-service.dns-6248.svc jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc]

Dec 23 02:40:50.102: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.104: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.107: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.110: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.113: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.115: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.118: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.121: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.137: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.139: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.141: INFO: Unable to read jessie_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.144: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.147: INFO: Unable to read jessie_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.150: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.152: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.155: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:50.170: INFO: Lookups using dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6248 wheezy_tcp@dns-test-service.dns-6248 wheezy_udp@dns-test-service.dns-6248.svc wheezy_tcp@dns-test-service.dns-6248.svc wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6248 jessie_tcp@dns-test-service.dns-6248 jessie_udp@dns-test-service.dns-6248.svc jessie_tcp@dns-test-service.dns-6248.svc jessie_udp@_http._tcp.dns-test-service.dns-6248.svc jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc]

Dec 23 02:40:55.103: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.107: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.114: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.117: INFO: Unable to read wheezy_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.121: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.124: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.127: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.151: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.154: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.157: INFO: Unable to read jessie_udp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.161: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248 from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.164: INFO: Unable to read jessie_udp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.167: INFO: Unable to read jessie_tcp@dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.170: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.173: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc from pod dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a: the server could not find the requested resource (get pods dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a)
Dec 23 02:40:55.192: INFO: Lookups using dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6248 wheezy_tcp@dns-test-service.dns-6248 wheezy_udp@dns-test-service.dns-6248.svc wheezy_tcp@dns-test-service.dns-6248.svc wheezy_udp@_http._tcp.dns-test-service.dns-6248.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6248.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6248 jessie_tcp@dns-test-service.dns-6248 jessie_udp@dns-test-service.dns-6248.svc jessie_tcp@dns-test-service.dns-6248.svc jessie_udp@_http._tcp.dns-test-service.dns-6248.svc jessie_tcp@_http._tcp.dns-test-service.dns-6248.svc]

Dec 23 02:41:00.176: INFO: DNS probes using dns-6248/dns-test-3e9eb0f1-8e74-4c07-b55e-428b6d50d52a succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:41:00.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6248" for this suite.

• [SLOW TEST:37.395 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":194,"skipped":3123,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:41:00.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a job from an image when restart is OnFailure [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 23 02:41:00.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9849'
Dec 23 02:41:01.099: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 02:41:01.099: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
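
The stderr above flags --generator=job/v1 as deprecated. A non-deprecated equivalent under the same assumptions (image and namespace taken from the log; note that kubectl create job emits a pod template with restartPolicy: Never rather than OnFailure):

  kubectl --kubeconfig=/root/.kube/config create job e2e-test-httpd-job \
    --namespace=kubectl-9849 \
    --image=docker.io/library/httpd:2.4.38-alpine
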
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Dec 23 02:41:01.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-9849'
Dec 23 02:41:01.218: INFO: stderr: ""
Dec 23 02:41:01.218: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:41:01.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9849" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Deprecated] [Conformance]","total":278,"completed":195,"skipped":3163,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:41:01.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2252.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2252.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local; sleep 1; done

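For reference, a Service like the one under test here can be sketched on the command line; the initial target foo.example.com is implied by the probe output further down, so treat it as inferred rather than quoted:

  kubectl -n dns-2252 create service externalname dns-test-service-3 \
    --external-name foo.example.com
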
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 02:41:07.427: INFO: DNS probes using dns-test-be4ebbd5-bc73-41bb-bbc3-f1ca0236cdf7 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2252.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2252.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 02:41:13.535: INFO: File wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local from pod  dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 23 02:41:13.538: INFO: File jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local from pod  dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 23 02:41:13.538: INFO: Lookups using dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 failed for: [wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local]

Dec 23 02:41:18.543: INFO: File wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local from pod  dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 23 02:41:18.546: INFO: File jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local from pod  dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 23 02:41:18.546: INFO: Lookups using dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 failed for: [wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local]

Dec 23 02:41:23.543: INFO: File wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local from pod  dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 23 02:41:23.546: INFO: File jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local from pod  dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 23 02:41:23.546: INFO: Lookups using dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 failed for: [wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local]

Dec 23 02:41:28.542: INFO: File wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local from pod  dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 23 02:41:28.546: INFO: File jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local from pod  dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 23 02:41:28.546: INFO: Lookups using dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 failed for: [wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local]

Dec 23 02:41:33.543: INFO: File wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local from pod  dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 23 02:41:33.546: INFO: File jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local from pod  dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 23 02:41:33.546: INFO: Lookups using dns-2252/dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 failed for: [wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local]

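The lookups above keep returning the stale CNAME until the cluster DNS starts serving the updated Service and the probe loop overwrites the result files, at which point the next poll succeeds (as it does below). The "changing the externalName" step corresponds to an update along these lines, sketched here since the log does not print it:

  kubectl -n dns-2252 patch service dns-test-service-3 \
    --type merge -p '{"spec":{"externalName":"bar.example.com"}}'
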
Dec 23 02:41:38.546: INFO: DNS probes using dns-test-5ca907e0-655e-4df0-b897-dddcb20a4b42 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2252.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2252.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2252.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2252.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 02:41:45.293: INFO: DNS probes using dns-test-4a2e5d23-6c37-4bcc-ae49-b9d297312baa succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:41:45.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2252" for this suite.

• [SLOW TEST:44.176 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":196,"skipped":3190,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:41:45.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
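
The quota object itself is not printed in this log; a hedged sketch of one that would capture a replication controller's lifecycle the way the steps above do (the name and count are illustrative, not taken from the test):

  kubectl -n resourcequota-3999 create quota test-quota \
    --hard=replicationcontrollers=1
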
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:41:56.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3999" for this suite.

• [SLOW TEST:11.108 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":197,"skipped":3206,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:41:56.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 23 02:41:56.653: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1012f39-ce1c-49e5-acbb-50ab44523ad7" in namespace "projected-164" to be "success or failure"
Dec 23 02:41:56.765: INFO: Pod "downwardapi-volume-e1012f39-ce1c-49e5-acbb-50ab44523ad7": Phase="Pending", Reason="", readiness=false. Elapsed: 111.997668ms
Dec 23 02:41:58.927: INFO: Pod "downwardapi-volume-e1012f39-ce1c-49e5-acbb-50ab44523ad7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.273826918s
Dec 23 02:42:00.931: INFO: Pod "downwardapi-volume-e1012f39-ce1c-49e5-acbb-50ab44523ad7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.278252509s
STEP: Saw pod success
Dec 23 02:42:00.931: INFO: Pod "downwardapi-volume-e1012f39-ce1c-49e5-acbb-50ab44523ad7" satisfied condition "success or failure"
Dec 23 02:42:00.938: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e1012f39-ce1c-49e5-acbb-50ab44523ad7 container client-container: 
STEP: delete the pod
Dec 23 02:42:00.979: INFO: Waiting for pod downwardapi-volume-e1012f39-ce1c-49e5-acbb-50ab44523ad7 to disappear
Dec 23 02:42:00.982: INFO: Pod downwardapi-volume-e1012f39-ce1c-49e5-acbb-50ab44523ad7 no longer exists
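
The pod above mounts a projected downwardAPI volume that exposes limits.memory; because the container sets no memory limit, the kubelet falls back to the node's allocatable memory, which is what the test asserts. The field being exercised can be inspected with kubectl explain (field path assumed from the core/v1 schema):

  kubectl explain pod.spec.volumes.projected.sources.downwardAPI.items.resourceFieldRef
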
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:42:00.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-164" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3225,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:42:00.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 23 02:42:05.068: INFO: &Pod{ObjectMeta:{send-events-f90ec185-de0e-4b4d-b507-1968b335f14a  events-5523 /api/v1/namespaces/events-5523/pods/send-events-f90ec185-de0e-4b4d-b507-1968b335f14a a7a5743b-556b-4829-94a1-7852e4dbe695 23943492 0 2020-12-23 02:42:01 +0000 UTC   map[name:foo time:30052156] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9kzp8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9kzp8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9kzp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:42:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:42:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:42:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:42:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.2.121,StartTime:2020-12-23 02:42:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:42:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://73b48d0cc52a2951985e5db4eaf3bd9ad77192b8dee580ef7c780c0b800cbd81,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Dec 23 02:42:07.074: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 23 02:42:09.079: INFO: Saw kubelet event for our pod.
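
The same two checks can be repeated by hand with a field selector on the event's involved object (pod name taken from the log above); the scheduler and kubelet events show up with their respective source components:

  kubectl -n events-5523 get events \
    --field-selector involvedObject.name=send-events-f90ec185-de0e-4b4d-b507-1968b335f14a
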
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:42:09.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5523" for this suite.

• [SLOW TEST:8.109 seconds]
[k8s.io] [sig-node] Events
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":199,"skipped":3235,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:42:09.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service multi-endpoint-test in namespace services-8549
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8549 to expose endpoints map[]
Dec 23 02:42:09.248: INFO: Get endpoints failed (13.450887ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 23 02:42:10.251: INFO: successfully validated that service multi-endpoint-test in namespace services-8549 exposes endpoints map[] (1.016372452s elapsed)
STEP: Creating pod pod1 in namespace services-8549
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8549 to expose endpoints map[pod1:[100]]
Dec 23 02:42:13.338: INFO: successfully validated that service multi-endpoint-test in namespace services-8549 exposes endpoints map[pod1:[100]] (3.079662357s elapsed)
STEP: Creating pod pod2 in namespace services-8549
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8549 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 23 02:42:16.538: INFO: successfully validated that service multi-endpoint-test in namespace services-8549 exposes endpoints map[pod1:[100] pod2:[101]] (3.19547012s elapsed)
STEP: Deleting pod pod1 in namespace services-8549
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8549 to expose endpoints map[pod2:[101]]
Dec 23 02:42:17.588: INFO: successfully validated that service multi-endpoint-test in namespace services-8549 exposes endpoints map[pod2:[101]] (1.044472157s elapsed)
STEP: Deleting pod pod2 in namespace services-8549
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8549 to expose endpoints map[]
Dec 23 02:42:18.601: INFO: successfully validated that service multi-endpoint-test in namespace services-8549 exposes endpoints map[] (1.009641304s elapsed)
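
Each "exposes endpoints map[...]" check above corresponds to reading the service's Endpoints object, where the container ports 100 and 101 appear as the subset ports. To watch the same object by hand:

  kubectl -n services-8549 get endpoints multi-endpoint-test -o yaml
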
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:42:18.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8549" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:9.894 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":200,"skipped":3245,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:42:18.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:42:19.059: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:42:25.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5967" for this suite.

• [SLOW TEST:6.055 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":201,"skipped":3251,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:42:25.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-118.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-118.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-118.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-118.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-118.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-118.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-118.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-118.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-118.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-118.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-118.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 151.136.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.136.151_udp@PTR;check="$$(dig +tcp +noall +answer +search 151.136.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.136.151_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-118.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-118.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-118.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-118.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-118.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-118.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-118.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-118.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-118.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-118.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-118.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 151.136.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.136.151_udp@PTR;check="$$(dig +tcp +noall +answer +search 151.136.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.136.151_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 23 02:42:31.219: INFO: Unable to read wheezy_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:31.223: INFO: Unable to read wheezy_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:31.227: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:31.230: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:31.254: INFO: Unable to read jessie_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:31.257: INFO: Unable to read jessie_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:31.260: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:31.262: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:31.285: INFO: Lookups using dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe failed for: [wheezy_udp@dns-test-service.dns-118.svc.cluster.local wheezy_tcp@dns-test-service.dns-118.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_udp@dns-test-service.dns-118.svc.cluster.local jessie_tcp@dns-test-service.dns-118.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local]

Dec 23 02:42:36.290: INFO: Unable to read wheezy_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:36.294: INFO: Unable to read wheezy_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:36.298: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:36.300: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:36.319: INFO: Unable to read jessie_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:36.322: INFO: Unable to read jessie_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:36.325: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:36.327: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:36.349: INFO: Lookups using dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe failed for: [wheezy_udp@dns-test-service.dns-118.svc.cluster.local wheezy_tcp@dns-test-service.dns-118.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_udp@dns-test-service.dns-118.svc.cluster.local jessie_tcp@dns-test-service.dns-118.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local]

Dec 23 02:42:41.290: INFO: Unable to read wheezy_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:41.294: INFO: Unable to read wheezy_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:41.298: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:41.301: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:41.325: INFO: Unable to read jessie_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:41.328: INFO: Unable to read jessie_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:41.330: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:41.332: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:41.348: INFO: Lookups using dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe failed for: [wheezy_udp@dns-test-service.dns-118.svc.cluster.local wheezy_tcp@dns-test-service.dns-118.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_udp@dns-test-service.dns-118.svc.cluster.local jessie_tcp@dns-test-service.dns-118.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local]

Dec 23 02:42:46.290: INFO: Unable to read wheezy_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:46.293: INFO: Unable to read wheezy_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:46.296: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:46.298: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:46.317: INFO: Unable to read jessie_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:46.320: INFO: Unable to read jessie_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:46.323: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:46.327: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:46.345: INFO: Lookups using dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe failed for: [wheezy_udp@dns-test-service.dns-118.svc.cluster.local wheezy_tcp@dns-test-service.dns-118.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_udp@dns-test-service.dns-118.svc.cluster.local jessie_tcp@dns-test-service.dns-118.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local]

Dec 23 02:42:51.289: INFO: Unable to read wheezy_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:51.293: INFO: Unable to read wheezy_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:51.297: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:51.300: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:51.322: INFO: Unable to read jessie_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:51.326: INFO: Unable to read jessie_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:51.328: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:51.331: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:51.351: INFO: Lookups using dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe failed for: [wheezy_udp@dns-test-service.dns-118.svc.cluster.local wheezy_tcp@dns-test-service.dns-118.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_udp@dns-test-service.dns-118.svc.cluster.local jessie_tcp@dns-test-service.dns-118.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local]

Dec 23 02:42:56.290: INFO: Unable to read wheezy_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:56.295: INFO: Unable to read wheezy_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:56.298: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:56.301: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:56.322: INFO: Unable to read jessie_udp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:56.325: INFO: Unable to read jessie_tcp@dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:56.383: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:56.406: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local from pod dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe: the server could not find the requested resource (get pods dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe)
Dec 23 02:42:56.425: INFO: Lookups using dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe failed for: [wheezy_udp@dns-test-service.dns-118.svc.cluster.local wheezy_tcp@dns-test-service.dns-118.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_udp@dns-test-service.dns-118.svc.cluster.local jessie_tcp@dns-test-service.dns-118.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-118.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-118.svc.cluster.local]

Dec 23 02:43:01.351: INFO: DNS probes using dns-118/dns-test-dc580f46-1cb0-4afa-906a-26e2a5a470fe succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:43:02.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-118" for this suite.

• [SLOW TEST:37.307 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":202,"skipped":3257,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:43:02.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 23 02:43:02.425: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:43:14.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-679" for this suite.

• [SLOW TEST:11.957 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:43:14.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-44377b19-449e-4784-82ad-4286be690655
STEP: Creating a pod to test consume configMaps
Dec 23 02:43:14.375: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7da35532-dc08-4723-b172-3270017b6f30" in namespace "projected-966" to be "success or failure"
Dec 23 02:43:14.430: INFO: Pod "pod-projected-configmaps-7da35532-dc08-4723-b172-3270017b6f30": Phase="Pending", Reason="", readiness=false. Elapsed: 55.345727ms
Dec 23 02:43:16.435: INFO: Pod "pod-projected-configmaps-7da35532-dc08-4723-b172-3270017b6f30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060089749s
Dec 23 02:43:18.439: INFO: Pod "pod-projected-configmaps-7da35532-dc08-4723-b172-3270017b6f30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064128622s
STEP: Saw pod success
Dec 23 02:43:18.439: INFO: Pod "pod-projected-configmaps-7da35532-dc08-4723-b172-3270017b6f30" satisfied condition "success or failure"
Dec 23 02:43:18.442: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-7da35532-dc08-4723-b172-3270017b6f30 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 02:43:18.628: INFO: Waiting for pod pod-projected-configmaps-7da35532-dc08-4723-b172-3270017b6f30 to disappear
Dec 23 02:43:18.646: INFO: Pod pod-projected-configmaps-7da35532-dc08-4723-b172-3270017b6f30 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:43:18.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-966" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3282,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:43:18.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:43:18.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5990" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":205,"skipped":3309,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:43:18.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 23 02:43:19.102: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-712 /api/v1/namespaces/watch-712/configmaps/e2e-watch-test-label-changed ed8ec31b-8973-43ac-81fc-58cac3286b34 23943959 0 2020-12-23 02:43:19 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 23 02:43:19.103: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-712 /api/v1/namespaces/watch-712/configmaps/e2e-watch-test-label-changed ed8ec31b-8973-43ac-81fc-58cac3286b34 23943960 0 2020-12-23 02:43:19 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 23 02:43:19.103: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-712 /api/v1/namespaces/watch-712/configmaps/e2e-watch-test-label-changed ed8ec31b-8973-43ac-81fc-58cac3286b34 23943961 0 2020-12-23 02:43:19 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 23 02:43:29.265: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-712 /api/v1/namespaces/watch-712/configmaps/e2e-watch-test-label-changed ed8ec31b-8973-43ac-81fc-58cac3286b34 23944016 0 2020-12-23 02:43:19 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 23 02:43:29.265: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-712 /api/v1/namespaces/watch-712/configmaps/e2e-watch-test-label-changed ed8ec31b-8973-43ac-81fc-58cac3286b34 23944017 0 2020-12-23 02:43:19 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 23 02:43:29.265: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-712 /api/v1/namespaces/watch-712/configmaps/e2e-watch-test-label-changed ed8ec31b-8973-43ac-81fc-58cac3286b34 23944018 0 2020-12-23 02:43:19 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:43:29.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-712" for this suite.

• [SLOW TEST:10.343 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":206,"skipped":3329,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:43:29.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:43:34.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9" for this suite.

• [SLOW TEST:5.183 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":207,"skipped":3347,"failed":0}
SSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:43:34.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:43:48.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7167" for this suite.

• [SLOW TEST:14.061 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":208,"skipped":3352,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:43:48.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-8f5a32f0-4985-453d-b886-14ab0f53c9b3
STEP: Creating a pod to test consume secrets
Dec 23 02:43:48.720: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b36d0b1f-960c-4a61-9742-b2453ec573d8" in namespace "projected-2433" to be "success or failure"
Dec 23 02:43:48.790: INFO: Pod "pod-projected-secrets-b36d0b1f-960c-4a61-9742-b2453ec573d8": Phase="Pending", Reason="", readiness=false. Elapsed: 70.599486ms
Dec 23 02:43:50.802: INFO: Pod "pod-projected-secrets-b36d0b1f-960c-4a61-9742-b2453ec573d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082546864s
Dec 23 02:43:52.808: INFO: Pod "pod-projected-secrets-b36d0b1f-960c-4a61-9742-b2453ec573d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088401465s
STEP: Saw pod success
Dec 23 02:43:52.808: INFO: Pod "pod-projected-secrets-b36d0b1f-960c-4a61-9742-b2453ec573d8" satisfied condition "success or failure"
Dec 23 02:43:52.810: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b36d0b1f-960c-4a61-9742-b2453ec573d8 container projected-secret-volume-test: 
STEP: delete the pod
Dec 23 02:43:52.870: INFO: Waiting for pod pod-projected-secrets-b36d0b1f-960c-4a61-9742-b2453ec573d8 to disappear
Dec 23 02:43:52.924: INFO: Pod pod-projected-secrets-b36d0b1f-960c-4a61-9742-b2453ec573d8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:43:52.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2433" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3361,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:43:53.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-6437
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Dec 23 02:43:53.283: INFO: Found 0 stateful pods, waiting for 3
Dec 23 02:44:03.287: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:44:03.287: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:44:03.287: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 23 02:44:13.287: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:44:13.287: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:44:13.287: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Dec 23 02:44:13.313: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 23 02:44:23.382: INFO: Updating stateful set ss2
Dec 23 02:44:23.416: INFO: Waiting for Pod statefulset-6437/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 23 02:44:33.424: INFO: Waiting for Pod statefulset-6437/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Dec 23 02:44:43.615: INFO: Found 2 stateful pods, waiting for 3
Dec 23 02:44:53.620: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:44:53.620: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 02:44:53.620: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 23 02:44:53.642: INFO: Updating stateful set ss2
Dec 23 02:44:53.696: INFO: Waiting for Pod statefulset-6437/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 23 02:45:03.811: INFO: Updating stateful set ss2
Dec 23 02:45:03.911: INFO: Waiting for StatefulSet statefulset-6437/ss2 to complete update
Dec 23 02:45:03.911: INFO: Waiting for Pod statefulset-6437/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 23 02:45:13.938: INFO: Waiting for StatefulSet statefulset-6437/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Dec 23 02:45:23.919: INFO: Deleting all statefulset in ns statefulset-6437
Dec 23 02:45:23.922: INFO: Scaling statefulset ss2 to 0
Dec 23 02:45:43.953: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 02:45:43.955: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:45:43.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6437" for this suite.

• [SLOW TEST:110.960 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":210,"skipped":3373,"failed":0}
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:45:43.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run default
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 23 02:45:44.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8970'
Dec 23 02:45:44.133: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 02:45:44.133: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1496
Dec 23 02:45:46.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8970'
Dec 23 02:45:46.445: INFO: stderr: ""
Dec 23 02:45:46.445: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:45:46.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8970" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":211,"skipped":3373,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:45:46.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 23 02:45:58.692: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4234 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:45:58.692: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:45:58.727585       6 log.go:172] (0xc00213a6e0) (0xc002b76b40) Create stream
I1223 02:45:58.727630       6 log.go:172] (0xc00213a6e0) (0xc002b76b40) Stream added, broadcasting: 1
I1223 02:45:58.730463       6 log.go:172] (0xc00213a6e0) Reply frame received for 1
I1223 02:45:58.730523       6 log.go:172] (0xc00213a6e0) (0xc001fdd180) Create stream
I1223 02:45:58.730546       6 log.go:172] (0xc00213a6e0) (0xc001fdd180) Stream added, broadcasting: 3
I1223 02:45:58.731625       6 log.go:172] (0xc00213a6e0) Reply frame received for 3
I1223 02:45:58.731670       6 log.go:172] (0xc00213a6e0) (0xc001fdd220) Create stream
I1223 02:45:58.731689       6 log.go:172] (0xc00213a6e0) (0xc001fdd220) Stream added, broadcasting: 5
I1223 02:45:58.732553       6 log.go:172] (0xc00213a6e0) Reply frame received for 5
I1223 02:45:58.835349       6 log.go:172] (0xc00213a6e0) Data frame received for 3
I1223 02:45:58.835394       6 log.go:172] (0xc001fdd180) (3) Data frame handling
I1223 02:45:58.835409       6 log.go:172] (0xc001fdd180) (3) Data frame sent
I1223 02:45:58.836252       6 log.go:172] (0xc00213a6e0) Data frame received for 5
I1223 02:45:58.836290       6 log.go:172] (0xc001fdd220) (5) Data frame handling
I1223 02:45:58.836332       6 log.go:172] (0xc00213a6e0) Data frame received for 3
I1223 02:45:58.836367       6 log.go:172] (0xc001fdd180) (3) Data frame handling
I1223 02:45:58.836607       6 log.go:172] (0xc00213a6e0) Data frame received for 1
I1223 02:45:58.836649       6 log.go:172] (0xc002b76b40) (1) Data frame handling
I1223 02:45:58.836687       6 log.go:172] (0xc002b76b40) (1) Data frame sent
I1223 02:45:58.836790       6 log.go:172] (0xc00213a6e0) (0xc002b76b40) Stream removed, broadcasting: 1
I1223 02:45:58.836814       6 log.go:172] (0xc00213a6e0) Go away received
I1223 02:45:58.837031       6 log.go:172] (0xc00213a6e0) (0xc002b76b40) Stream removed, broadcasting: 1
I1223 02:45:58.837071       6 log.go:172] (0xc00213a6e0) (0xc001fdd180) Stream removed, broadcasting: 3
I1223 02:45:58.837099       6 log.go:172] (0xc00213a6e0) (0xc001fdd220) Stream removed, broadcasting: 5
Dec 23 02:45:58.837: INFO: Exec stderr: ""
Dec 23 02:45:58.837: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4234 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:45:58.837: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:45:58.863139       6 log.go:172] (0xc002ae3970) (0xc0018000a0) Create stream
I1223 02:45:58.863176       6 log.go:172] (0xc002ae3970) (0xc0018000a0) Stream added, broadcasting: 1
I1223 02:45:58.866096       6 log.go:172] (0xc002ae3970) Reply frame received for 1
I1223 02:45:58.866145       6 log.go:172] (0xc002ae3970) (0xc002a01400) Create stream
I1223 02:45:58.866174       6 log.go:172] (0xc002ae3970) (0xc002a01400) Stream added, broadcasting: 3
I1223 02:45:58.867565       6 log.go:172] (0xc002ae3970) Reply frame received for 3
I1223 02:45:58.867608       6 log.go:172] (0xc002ae3970) (0xc0028ea000) Create stream
I1223 02:45:58.867626       6 log.go:172] (0xc002ae3970) (0xc0028ea000) Stream added, broadcasting: 5
I1223 02:45:58.868953       6 log.go:172] (0xc002ae3970) Reply frame received for 5
I1223 02:45:58.936143       6 log.go:172] (0xc002ae3970) Data frame received for 3
I1223 02:45:58.936179       6 log.go:172] (0xc002a01400) (3) Data frame handling
I1223 02:45:58.936188       6 log.go:172] (0xc002a01400) (3) Data frame sent
I1223 02:45:58.936196       6 log.go:172] (0xc002ae3970) Data frame received for 3
I1223 02:45:58.936203       6 log.go:172] (0xc002a01400) (3) Data frame handling
I1223 02:45:58.936232       6 log.go:172] (0xc002ae3970) Data frame received for 5
I1223 02:45:58.936249       6 log.go:172] (0xc0028ea000) (5) Data frame handling
I1223 02:45:58.937760       6 log.go:172] (0xc002ae3970) Data frame received for 1
I1223 02:45:58.937779       6 log.go:172] (0xc0018000a0) (1) Data frame handling
I1223 02:45:58.937792       6 log.go:172] (0xc0018000a0) (1) Data frame sent
I1223 02:45:58.937812       6 log.go:172] (0xc002ae3970) (0xc0018000a0) Stream removed, broadcasting: 1
I1223 02:45:58.937830       6 log.go:172] (0xc002ae3970) Go away received
I1223 02:45:58.937901       6 log.go:172] (0xc002ae3970) (0xc0018000a0) Stream removed, broadcasting: 1
I1223 02:45:58.937925       6 log.go:172] (0xc002ae3970) (0xc002a01400) Stream removed, broadcasting: 3
I1223 02:45:58.937937       6 log.go:172] (0xc002ae3970) (0xc0028ea000) Stream removed, broadcasting: 5
Dec 23 02:45:58.937: INFO: Exec stderr: ""
Dec 23 02:45:58.937: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4234 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:45:58.938: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:45:58.965893       6 log.go:172] (0xc00213a210) (0xc0018b8000) Create stream
I1223 02:45:58.965917       6 log.go:172] (0xc00213a210) (0xc0018b8000) Stream added, broadcasting: 1
I1223 02:45:58.968272       6 log.go:172] (0xc00213a210) Reply frame received for 1
I1223 02:45:58.968312       6 log.go:172] (0xc00213a210) (0xc001a703c0) Create stream
I1223 02:45:58.968327       6 log.go:172] (0xc00213a210) (0xc001a703c0) Stream added, broadcasting: 3
I1223 02:45:58.969390       6 log.go:172] (0xc00213a210) Reply frame received for 3
I1223 02:45:58.969412       6 log.go:172] (0xc00213a210) (0xc001a70500) Create stream
I1223 02:45:58.969418       6 log.go:172] (0xc00213a210) (0xc001a70500) Stream added, broadcasting: 5
I1223 02:45:58.970408       6 log.go:172] (0xc00213a210) Reply frame received for 5
I1223 02:45:59.037691       6 log.go:172] (0xc00213a210) Data frame received for 5
I1223 02:45:59.037718       6 log.go:172] (0xc001a70500) (5) Data frame handling
I1223 02:45:59.037748       6 log.go:172] (0xc00213a210) Data frame received for 3
I1223 02:45:59.037755       6 log.go:172] (0xc001a703c0) (3) Data frame handling
I1223 02:45:59.037862       6 log.go:172] (0xc001a703c0) (3) Data frame sent
I1223 02:45:59.037917       6 log.go:172] (0xc00213a210) Data frame received for 3
I1223 02:45:59.037929       6 log.go:172] (0xc001a703c0) (3) Data frame handling
I1223 02:45:59.039148       6 log.go:172] (0xc00213a210) Data frame received for 1
I1223 02:45:59.039176       6 log.go:172] (0xc0018b8000) (1) Data frame handling
I1223 02:45:59.039203       6 log.go:172] (0xc0018b8000) (1) Data frame sent
I1223 02:45:59.039226       6 log.go:172] (0xc00213a210) (0xc0018b8000) Stream removed, broadcasting: 1
I1223 02:45:59.039255       6 log.go:172] (0xc00213a210) Go away received
I1223 02:45:59.039343       6 log.go:172] (0xc00213a210) (0xc0018b8000) Stream removed, broadcasting: 1
I1223 02:45:59.039362       6 log.go:172] (0xc00213a210) (0xc001a703c0) Stream removed, broadcasting: 3
I1223 02:45:59.039370       6 log.go:172] (0xc00213a210) (0xc001a70500) Stream removed, broadcasting: 5
Dec 23 02:45:59.039: INFO: Exec stderr: ""
Dec 23 02:45:59.039: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4234 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:45:59.039: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:45:59.070982       6 log.go:172] (0xc002520c60) (0xc001c04140) Create stream
I1223 02:45:59.071010       6 log.go:172] (0xc002520c60) (0xc001c04140) Stream added, broadcasting: 1
I1223 02:45:59.073149       6 log.go:172] (0xc002520c60) Reply frame received for 1
I1223 02:45:59.073174       6 log.go:172] (0xc002520c60) (0xc0018b86e0) Create stream
I1223 02:45:59.073181       6 log.go:172] (0xc002520c60) (0xc0018b86e0) Stream added, broadcasting: 3
I1223 02:45:59.074158       6 log.go:172] (0xc002520c60) Reply frame received for 3
I1223 02:45:59.074209       6 log.go:172] (0xc002520c60) (0xc000490640) Create stream
I1223 02:45:59.074226       6 log.go:172] (0xc002520c60) (0xc000490640) Stream added, broadcasting: 5
I1223 02:45:59.075254       6 log.go:172] (0xc002520c60) Reply frame received for 5
I1223 02:45:59.144573       6 log.go:172] (0xc002520c60) Data frame received for 5
I1223 02:45:59.144665       6 log.go:172] (0xc000490640) (5) Data frame handling
I1223 02:45:59.144704       6 log.go:172] (0xc002520c60) Data frame received for 3
I1223 02:45:59.144722       6 log.go:172] (0xc0018b86e0) (3) Data frame handling
I1223 02:45:59.144742       6 log.go:172] (0xc0018b86e0) (3) Data frame sent
I1223 02:45:59.144758       6 log.go:172] (0xc002520c60) Data frame received for 3
I1223 02:45:59.144776       6 log.go:172] (0xc0018b86e0) (3) Data frame handling
I1223 02:45:59.146533       6 log.go:172] (0xc002520c60) Data frame received for 1
I1223 02:45:59.146555       6 log.go:172] (0xc001c04140) (1) Data frame handling
I1223 02:45:59.146567       6 log.go:172] (0xc001c04140) (1) Data frame sent
I1223 02:45:59.146583       6 log.go:172] (0xc002520c60) (0xc001c04140) Stream removed, broadcasting: 1
I1223 02:45:59.146596       6 log.go:172] (0xc002520c60) Go away received
I1223 02:45:59.146759       6 log.go:172] (0xc002520c60) (0xc001c04140) Stream removed, broadcasting: 1
I1223 02:45:59.146813       6 log.go:172] (0xc002520c60) (0xc0018b86e0) Stream removed, broadcasting: 3
I1223 02:45:59.146860       6 log.go:172] (0xc002520c60) (0xc000490640) Stream removed, broadcasting: 5
Dec 23 02:45:59.146: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 23 02:45:59.146: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4234 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:45:59.147: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:45:59.179046       6 log.go:172] (0xc00213a630) (0xc0018b8820) Create stream
I1223 02:45:59.179075       6 log.go:172] (0xc00213a630) (0xc0018b8820) Stream added, broadcasting: 1
I1223 02:45:59.181931       6 log.go:172] (0xc00213a630) Reply frame received for 1
I1223 02:45:59.181984       6 log.go:172] (0xc00213a630) (0xc0018b88c0) Create stream
I1223 02:45:59.182024       6 log.go:172] (0xc00213a630) (0xc0018b88c0) Stream added, broadcasting: 3
I1223 02:45:59.183145       6 log.go:172] (0xc00213a630) Reply frame received for 3
I1223 02:45:59.183186       6 log.go:172] (0xc00213a630) (0xc0018b8b40) Create stream
I1223 02:45:59.183201       6 log.go:172] (0xc00213a630) (0xc0018b8b40) Stream added, broadcasting: 5
I1223 02:45:59.184352       6 log.go:172] (0xc00213a630) Reply frame received for 5
I1223 02:45:59.246614       6 log.go:172] (0xc00213a630) Data frame received for 5
I1223 02:45:59.246658       6 log.go:172] (0xc0018b8b40) (5) Data frame handling
I1223 02:45:59.246683       6 log.go:172] (0xc00213a630) Data frame received for 3
I1223 02:45:59.246699       6 log.go:172] (0xc0018b88c0) (3) Data frame handling
I1223 02:45:59.246716       6 log.go:172] (0xc0018b88c0) (3) Data frame sent
I1223 02:45:59.246729       6 log.go:172] (0xc00213a630) Data frame received for 3
I1223 02:45:59.246741       6 log.go:172] (0xc0018b88c0) (3) Data frame handling
I1223 02:45:59.248504       6 log.go:172] (0xc00213a630) Data frame received for 1
I1223 02:45:59.248565       6 log.go:172] (0xc0018b8820) (1) Data frame handling
I1223 02:45:59.248628       6 log.go:172] (0xc0018b8820) (1) Data frame sent
I1223 02:45:59.248680       6 log.go:172] (0xc00213a630) (0xc0018b8820) Stream removed, broadcasting: 1
I1223 02:45:59.248741       6 log.go:172] (0xc00213a630) Go away received
I1223 02:45:59.248988       6 log.go:172] (0xc00213a630) (0xc0018b8820) Stream removed, broadcasting: 1
I1223 02:45:59.249078       6 log.go:172] (0xc00213a630) (0xc0018b88c0) Stream removed, broadcasting: 3
I1223 02:45:59.249107       6 log.go:172] (0xc00213a630) (0xc0018b8b40) Stream removed, broadcasting: 5
Dec 23 02:45:59.249: INFO: Exec stderr: ""
Dec 23 02:45:59.249: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4234 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:45:59.249: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:45:59.281647       6 log.go:172] (0xc001c9c210) (0xc001a70dc0) Create stream
I1223 02:45:59.281680       6 log.go:172] (0xc001c9c210) (0xc001a70dc0) Stream added, broadcasting: 1
I1223 02:45:59.283260       6 log.go:172] (0xc001c9c210) Reply frame received for 1
I1223 02:45:59.283291       6 log.go:172] (0xc001c9c210) (0xc001a710e0) Create stream
I1223 02:45:59.283297       6 log.go:172] (0xc001c9c210) (0xc001a710e0) Stream added, broadcasting: 3
I1223 02:45:59.283931       6 log.go:172] (0xc001c9c210) Reply frame received for 3
I1223 02:45:59.283956       6 log.go:172] (0xc001c9c210) (0xc001a71180) Create stream
I1223 02:45:59.283968       6 log.go:172] (0xc001c9c210) (0xc001a71180) Stream added, broadcasting: 5
I1223 02:45:59.284637       6 log.go:172] (0xc001c9c210) Reply frame received for 5
I1223 02:45:59.365830       6 log.go:172] (0xc001c9c210) Data frame received for 3
I1223 02:45:59.365879       6 log.go:172] (0xc001a710e0) (3) Data frame handling
I1223 02:45:59.365892       6 log.go:172] (0xc001a710e0) (3) Data frame sent
I1223 02:45:59.365900       6 log.go:172] (0xc001c9c210) Data frame received for 3
I1223 02:45:59.365905       6 log.go:172] (0xc001a710e0) (3) Data frame handling
I1223 02:45:59.365983       6 log.go:172] (0xc001c9c210) Data frame received for 5
I1223 02:45:59.366019       6 log.go:172] (0xc001a71180) (5) Data frame handling
I1223 02:45:59.367050       6 log.go:172] (0xc001c9c210) Data frame received for 1
I1223 02:45:59.367063       6 log.go:172] (0xc001a70dc0) (1) Data frame handling
I1223 02:45:59.367071       6 log.go:172] (0xc001a70dc0) (1) Data frame sent
I1223 02:45:59.367089       6 log.go:172] (0xc001c9c210) (0xc001a70dc0) Stream removed, broadcasting: 1
I1223 02:45:59.367162       6 log.go:172] (0xc001c9c210) (0xc001a70dc0) Stream removed, broadcasting: 1
I1223 02:45:59.367183       6 log.go:172] (0xc001c9c210) (0xc001a710e0) Stream removed, broadcasting: 3
I1223 02:45:59.367203       6 log.go:172] (0xc001c9c210) (0xc001a71180) Stream removed, broadcasting: 5
Dec 23 02:45:59.367: INFO: Exec stderr: ""
I1223 02:45:59.367223       6 log.go:172] (0xc001c9c210) Go away received
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 23 02:45:59.367: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4234 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:45:59.367: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:45:59.391410       6 log.go:172] (0xc0025211e0) (0xc001c045a0) Create stream
I1223 02:45:59.391441       6 log.go:172] (0xc0025211e0) (0xc001c045a0) Stream added, broadcasting: 1
I1223 02:45:59.393810       6 log.go:172] (0xc0025211e0) Reply frame received for 1
I1223 02:45:59.393847       6 log.go:172] (0xc0025211e0) (0xc001c04640) Create stream
I1223 02:45:59.393856       6 log.go:172] (0xc0025211e0) (0xc001c04640) Stream added, broadcasting: 3
I1223 02:45:59.394733       6 log.go:172] (0xc0025211e0) Reply frame received for 3
I1223 02:45:59.394784       6 log.go:172] (0xc0025211e0) (0xc001c04780) Create stream
I1223 02:45:59.394809       6 log.go:172] (0xc0025211e0) (0xc001c04780) Stream added, broadcasting: 5
I1223 02:45:59.395765       6 log.go:172] (0xc0025211e0) Reply frame received for 5
I1223 02:45:59.458361       6 log.go:172] (0xc0025211e0) Data frame received for 3
I1223 02:45:59.458396       6 log.go:172] (0xc001c04640) (3) Data frame handling
I1223 02:45:59.458409       6 log.go:172] (0xc001c04640) (3) Data frame sent
I1223 02:45:59.458418       6 log.go:172] (0xc0025211e0) Data frame received for 3
I1223 02:45:59.458424       6 log.go:172] (0xc001c04640) (3) Data frame handling
I1223 02:45:59.458471       6 log.go:172] (0xc0025211e0) Data frame received for 5
I1223 02:45:59.458488       6 log.go:172] (0xc001c04780) (5) Data frame handling
I1223 02:45:59.459634       6 log.go:172] (0xc0025211e0) Data frame received for 1
I1223 02:45:59.459656       6 log.go:172] (0xc001c045a0) (1) Data frame handling
I1223 02:45:59.459667       6 log.go:172] (0xc001c045a0) (1) Data frame sent
I1223 02:45:59.459684       6 log.go:172] (0xc0025211e0) (0xc001c045a0) Stream removed, broadcasting: 1
I1223 02:45:59.459715       6 log.go:172] (0xc0025211e0) Go away received
I1223 02:45:59.459779       6 log.go:172] (0xc0025211e0) (0xc001c045a0) Stream removed, broadcasting: 1
I1223 02:45:59.459793       6 log.go:172] (0xc0025211e0) (0xc001c04640) Stream removed, broadcasting: 3
I1223 02:45:59.459803       6 log.go:172] (0xc0025211e0) (0xc001c04780) Stream removed, broadcasting: 5
Dec 23 02:45:59.459: INFO: Exec stderr: ""
Dec 23 02:45:59.459: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4234 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:45:59.459: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:45:59.488220       6 log.go:172] (0xc001c9c4d0) (0xc001a717c0) Create stream
I1223 02:45:59.488255       6 log.go:172] (0xc001c9c4d0) (0xc001a717c0) Stream added, broadcasting: 1
I1223 02:45:59.491401       6 log.go:172] (0xc001c9c4d0) Reply frame received for 1
I1223 02:45:59.491455       6 log.go:172] (0xc001c9c4d0) (0xc001a71900) Create stream
I1223 02:45:59.491468       6 log.go:172] (0xc001c9c4d0) (0xc001a71900) Stream added, broadcasting: 3
I1223 02:45:59.492535       6 log.go:172] (0xc001c9c4d0) Reply frame received for 3
I1223 02:45:59.492574       6 log.go:172] (0xc001c9c4d0) (0xc001c048c0) Create stream
I1223 02:45:59.492600       6 log.go:172] (0xc001c9c4d0) (0xc001c048c0) Stream added, broadcasting: 5
I1223 02:45:59.493638       6 log.go:172] (0xc001c9c4d0) Reply frame received for 5
I1223 02:45:59.585227       6 log.go:172] (0xc001c9c4d0) Data frame received for 3
I1223 02:45:59.585275       6 log.go:172] (0xc001a71900) (3) Data frame handling
I1223 02:45:59.585318       6 log.go:172] (0xc001a71900) (3) Data frame sent
I1223 02:45:59.585355       6 log.go:172] (0xc001c9c4d0) Data frame received for 3
I1223 02:45:59.585393       6 log.go:172] (0xc001a71900) (3) Data frame handling
I1223 02:45:59.585428       6 log.go:172] (0xc001c9c4d0) Data frame received for 5
I1223 02:45:59.585468       6 log.go:172] (0xc001c048c0) (5) Data frame handling
I1223 02:45:59.586919       6 log.go:172] (0xc001c9c4d0) Data frame received for 1
I1223 02:45:59.586946       6 log.go:172] (0xc001a717c0) (1) Data frame handling
I1223 02:45:59.586962       6 log.go:172] (0xc001a717c0) (1) Data frame sent
I1223 02:45:59.586982       6 log.go:172] (0xc001c9c4d0) (0xc001a717c0) Stream removed, broadcasting: 1
I1223 02:45:59.587006       6 log.go:172] (0xc001c9c4d0) Go away received
I1223 02:45:59.587055       6 log.go:172] (0xc001c9c4d0) (0xc001a717c0) Stream removed, broadcasting: 1
I1223 02:45:59.587076       6 log.go:172] (0xc001c9c4d0) (0xc001a71900) Stream removed, broadcasting: 3
I1223 02:45:59.587083       6 log.go:172] (0xc001c9c4d0) (0xc001c048c0) Stream removed, broadcasting: 5
Dec 23 02:45:59.587: INFO: Exec stderr: ""
Dec 23 02:45:59.587: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4234 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:45:59.587: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:45:59.615729       6 log.go:172] (0xc00248d130) (0xc000490b40) Create stream
I1223 02:45:59.615755       6 log.go:172] (0xc00248d130) (0xc000490b40) Stream added, broadcasting: 1
I1223 02:45:59.625584       6 log.go:172] (0xc00248d130) Reply frame received for 1
I1223 02:45:59.625637       6 log.go:172] (0xc00248d130) (0xc001a71cc0) Create stream
I1223 02:45:59.625651       6 log.go:172] (0xc00248d130) (0xc001a71cc0) Stream added, broadcasting: 3
I1223 02:45:59.626500       6 log.go:172] (0xc00248d130) Reply frame received for 3
I1223 02:45:59.626533       6 log.go:172] (0xc00248d130) (0xc000490e60) Create stream
I1223 02:45:59.626546       6 log.go:172] (0xc00248d130) (0xc000490e60) Stream added, broadcasting: 5
I1223 02:45:59.627585       6 log.go:172] (0xc00248d130) Reply frame received for 5
I1223 02:45:59.692288       6 log.go:172] (0xc00248d130) Data frame received for 3
I1223 02:45:59.692320       6 log.go:172] (0xc001a71cc0) (3) Data frame handling
I1223 02:45:59.692334       6 log.go:172] (0xc001a71cc0) (3) Data frame sent
I1223 02:45:59.692344       6 log.go:172] (0xc00248d130) Data frame received for 3
I1223 02:45:59.692360       6 log.go:172] (0xc001a71cc0) (3) Data frame handling
I1223 02:45:59.692410       6 log.go:172] (0xc00248d130) Data frame received for 5
I1223 02:45:59.692418       6 log.go:172] (0xc000490e60) (5) Data frame handling
I1223 02:45:59.693979       6 log.go:172] (0xc00248d130) Data frame received for 1
I1223 02:45:59.693998       6 log.go:172] (0xc000490b40) (1) Data frame handling
I1223 02:45:59.694007       6 log.go:172] (0xc000490b40) (1) Data frame sent
I1223 02:45:59.694026       6 log.go:172] (0xc00248d130) (0xc000490b40) Stream removed, broadcasting: 1
I1223 02:45:59.694165       6 log.go:172] (0xc00248d130) (0xc000490b40) Stream removed, broadcasting: 1
I1223 02:45:59.694202       6 log.go:172] (0xc00248d130) (0xc001a71cc0) Stream removed, broadcasting: 3
I1223 02:45:59.694218       6 log.go:172] (0xc00248d130) (0xc000490e60) Stream removed, broadcasting: 5
Dec 23 02:45:59.694: INFO: Exec stderr: ""
I1223 02:45:59.694281       6 log.go:172] (0xc00248d130) Go away received
Dec 23 02:45:59.694: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4234 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:45:59.694: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:45:59.726868       6 log.go:172] (0xc003f6c580) (0xc0018843c0) Create stream
I1223 02:45:59.726906       6 log.go:172] (0xc003f6c580) (0xc0018843c0) Stream added, broadcasting: 1
I1223 02:45:59.730426       6 log.go:172] (0xc003f6c580) Reply frame received for 1
I1223 02:45:59.730468       6 log.go:172] (0xc003f6c580) (0xc001c04b40) Create stream
I1223 02:45:59.730482       6 log.go:172] (0xc003f6c580) (0xc001c04b40) Stream added, broadcasting: 3
I1223 02:45:59.731459       6 log.go:172] (0xc003f6c580) Reply frame received for 3
I1223 02:45:59.731493       6 log.go:172] (0xc003f6c580) (0xc002a006e0) Create stream
I1223 02:45:59.731504       6 log.go:172] (0xc003f6c580) (0xc002a006e0) Stream added, broadcasting: 5
I1223 02:45:59.732508       6 log.go:172] (0xc003f6c580) Reply frame received for 5
I1223 02:45:59.806733       6 log.go:172] (0xc003f6c580) Data frame received for 3
I1223 02:45:59.806768       6 log.go:172] (0xc001c04b40) (3) Data frame handling
I1223 02:45:59.806803       6 log.go:172] (0xc003f6c580) Data frame received for 5
I1223 02:45:59.806849       6 log.go:172] (0xc002a006e0) (5) Data frame handling
I1223 02:45:59.806893       6 log.go:172] (0xc001c04b40) (3) Data frame sent
I1223 02:45:59.806915       6 log.go:172] (0xc003f6c580) Data frame received for 3
I1223 02:45:59.806932       6 log.go:172] (0xc001c04b40) (3) Data frame handling
I1223 02:45:59.808530       6 log.go:172] (0xc003f6c580) Data frame received for 1
I1223 02:45:59.808552       6 log.go:172] (0xc0018843c0) (1) Data frame handling
I1223 02:45:59.808561       6 log.go:172] (0xc0018843c0) (1) Data frame sent
I1223 02:45:59.808570       6 log.go:172] (0xc003f6c580) (0xc0018843c0) Stream removed, broadcasting: 1
I1223 02:45:59.808581       6 log.go:172] (0xc003f6c580) Go away received
I1223 02:45:59.808690       6 log.go:172] (0xc003f6c580) (0xc0018843c0) Stream removed, broadcasting: 1
I1223 02:45:59.808723       6 log.go:172] (0xc003f6c580) (0xc001c04b40) Stream removed, broadcasting: 3
I1223 02:45:59.808736       6 log.go:172] (0xc003f6c580) (0xc002a006e0) Stream removed, broadcasting: 5
Dec 23 02:45:59.808: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:45:59.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4234" for this suite.

• [SLOW TEST:13.300 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3390,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:45:59.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 23 02:46:00.627: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 23 02:46:02.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288360, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288360, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288360, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288360, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 02:46:05.670: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: running 'kubectl attach' on the pod; it should be denied by the webhook
Dec 23 02:46:09.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3841 to-be-attached-pod -i -c=container1'
Dec 23 02:46:09.858: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:46:09.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3841" for this suite.
STEP: Destroying namespace "webhook-3841-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.182 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":213,"skipped":3391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:46:10.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Dec 23 02:46:10.065: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 23 02:46:10.082: INFO: Waiting for terminating namespaces to be deleted...
Dec 23 02:46:10.085: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Dec 23 02:46:10.103: INFO: kindnet-nlsvd from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container status recorded)
Dec 23 02:46:10.103: INFO: 	Container kindnet-cni ready: true, restart count 0
Dec 23 02:46:10.103: INFO: test-pod from e2e-kubelet-etc-hosts-4234 started at 2020-12-23 02:45:46 +0000 UTC (3 container statuses recorded)
Dec 23 02:46:10.103: INFO: 	Container busybox-1 ready: true, restart count 0
Dec 23 02:46:10.103: INFO: 	Container busybox-2 ready: true, restart count 0
Dec 23 02:46:10.103: INFO: 	Container busybox-3 ready: true, restart count 0
Dec 23 02:46:10.103: INFO: chaos-controller-manager-7f9bbd476f-jm8nf from default started at 2020-11-22 21:56:29 +0000 UTC (1 container status recorded)
Dec 23 02:46:10.103: INFO: 	Container chaos-mesh ready: true, restart count 0
Dec 23 02:46:10.103: INFO: kube-proxy-knc9b from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container status recorded)
Dec 23 02:46:10.104: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 02:46:10.104: INFO: chaos-daemon-r2kj7 from default started at 2020-11-22 21:56:29 +0000 UTC (1 container status recorded)
Dec 23 02:46:10.104: INFO: 	Container chaos-daemon ready: true, restart count 0
Dec 23 02:46:10.104: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Dec 23 02:46:10.123: INFO: kindnet-5wksn from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container status recorded)
Dec 23 02:46:10.123: INFO: 	Container kindnet-cni ready: true, restart count 0
Dec 23 02:46:10.123: INFO: test-host-network-pod from e2e-kubelet-etc-hosts-4234 started at 2020-12-23 02:45:54 +0000 UTC (2 container statuses recorded)
Dec 23 02:46:10.123: INFO: 	Container busybox-1 ready: true, restart count 0
Dec 23 02:46:10.123: INFO: 	Container busybox-2 ready: true, restart count 0
Dec 23 02:46:10.123: INFO: kube-proxy-jgndm from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container status recorded)
Dec 23 02:46:10.123: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 02:46:10.123: INFO: sample-webhook-deployment-5f65f8c764-ltxfm from webhook-3841 started at 2020-12-23 02:46:00 +0000 UTC (1 container status recorded)
Dec 23 02:46:10.123: INFO: 	Container sample-webhook ready: true, restart count 0
Dec 23 02:46:10.123: INFO: to-be-attached-pod from webhook-3841 started at 2020-12-23 02:46:05 +0000 UTC (1 container status recorded)
Dec 23 02:46:10.123: INFO: 	Container container1 ready: true, restart count 0
Dec 23 02:46:10.123: INFO: chaos-daemon-mzgg5 from default started at 2020-11-22 21:56:28 +0000 UTC (1 container status recorded)
Dec 23 02:46:10.123: INFO: 	Container chaos-daemon ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8f7e0d53-7266-4cdf-8b0a-30c97473f336 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, expecting it to be scheduled
STEP: Trying to create another pod (pod2) with the same hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides, expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but the UDP protocol on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-8f7e0d53-7266-4cdf-8b0a-30c97473f336 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8f7e0d53-7266-4cdf-8b0a-30c97473f336
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:46:26.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1564" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:16.385 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":214,"skipped":3431,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:46:26.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-71e7c0f9-c06b-4a43-98b0-259e75db65ea
STEP: Creating a pod to test consume configMaps
Dec 23 02:46:26.511: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-98c2d7b5-142d-4d4f-9d80-f202db732916" in namespace "projected-377" to be "success or failure"
Dec 23 02:46:26.582: INFO: Pod "pod-projected-configmaps-98c2d7b5-142d-4d4f-9d80-f202db732916": Phase="Pending", Reason="", readiness=false. Elapsed: 70.354064ms
Dec 23 02:46:28.585: INFO: Pod "pod-projected-configmaps-98c2d7b5-142d-4d4f-9d80-f202db732916": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073681017s
Dec 23 02:46:30.589: INFO: Pod "pod-projected-configmaps-98c2d7b5-142d-4d4f-9d80-f202db732916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07761856s
STEP: Saw pod success
Dec 23 02:46:30.589: INFO: Pod "pod-projected-configmaps-98c2d7b5-142d-4d4f-9d80-f202db732916" satisfied condition "success or failure"
Dec 23 02:46:30.592: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-98c2d7b5-142d-4d4f-9d80-f202db732916 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 02:46:30.631: INFO: Waiting for pod pod-projected-configmaps-98c2d7b5-142d-4d4f-9d80-f202db732916 to disappear
Dec 23 02:46:30.654: INFO: Pod pod-projected-configmaps-98c2d7b5-142d-4d4f-9d80-f202db732916 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:46:30.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-377" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3437,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:46:30.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-209ce108-934b-45d8-a92f-a04d0a603cdb
STEP: Creating a pod to test consume secrets
Dec 23 02:46:30.748: INFO: Waiting up to 5m0s for pod "pod-secrets-402403cd-78bc-4743-a017-566542be4682" in namespace "secrets-2574" to be "success or failure"
Dec 23 02:46:30.781: INFO: Pod "pod-secrets-402403cd-78bc-4743-a017-566542be4682": Phase="Pending", Reason="", readiness=false. Elapsed: 32.713032ms
Dec 23 02:46:32.783: INFO: Pod "pod-secrets-402403cd-78bc-4743-a017-566542be4682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035290299s
Dec 23 02:46:34.787: INFO: Pod "pod-secrets-402403cd-78bc-4743-a017-566542be4682": Phase="Running", Reason="", readiness=true. Elapsed: 4.038750376s
Dec 23 02:46:36.790: INFO: Pod "pod-secrets-402403cd-78bc-4743-a017-566542be4682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042320862s
STEP: Saw pod success
Dec 23 02:46:36.790: INFO: Pod "pod-secrets-402403cd-78bc-4743-a017-566542be4682" satisfied condition "success or failure"
Dec 23 02:46:36.793: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-402403cd-78bc-4743-a017-566542be4682 container secret-volume-test: 
STEP: delete the pod
Dec 23 02:46:36.821: INFO: Waiting for pod pod-secrets-402403cd-78bc-4743-a017-566542be4682 to disappear
Dec 23 02:46:36.997: INFO: Pod pod-secrets-402403cd-78bc-4743-a017-566542be4682 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:46:36.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2574" for this suite.

• [SLOW TEST:6.344 seconds]
[sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3455,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:46:37.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:47:08.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5767" for this suite.
STEP: Destroying namespace "nsdeletetest-7102" for this suite.
Dec 23 02:47:08.681: INFO: Namespace nsdeletetest-7102 was already deleted
STEP: Destroying namespace "nsdeletetest-6520" for this suite.

• [SLOW TEST:31.679 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":217,"skipped":3461,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:47:08.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-7280/configmap-test-2dcf4ccd-deab-4ad3-94fd-941f627e5a79
STEP: Creating a pod to test consume configMaps
Dec 23 02:47:08.750: INFO: Waiting up to 5m0s for pod "pod-configmaps-1616e229-4c3c-49d8-9fb8-17749a1af6f3" in namespace "configmap-7280" to be "success or failure"
Dec 23 02:47:08.754: INFO: Pod "pod-configmaps-1616e229-4c3c-49d8-9fb8-17749a1af6f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.938802ms
Dec 23 02:47:10.759: INFO: Pod "pod-configmaps-1616e229-4c3c-49d8-9fb8-17749a1af6f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00813182s
Dec 23 02:47:12.763: INFO: Pod "pod-configmaps-1616e229-4c3c-49d8-9fb8-17749a1af6f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012664492s
STEP: Saw pod success
Dec 23 02:47:12.763: INFO: Pod "pod-configmaps-1616e229-4c3c-49d8-9fb8-17749a1af6f3" satisfied condition "success or failure"
Dec 23 02:47:12.767: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1616e229-4c3c-49d8-9fb8-17749a1af6f3 container env-test: 
STEP: delete the pod
Dec 23 02:47:12.786: INFO: Waiting for pod pod-configmaps-1616e229-4c3c-49d8-9fb8-17749a1af6f3 to disappear
Dec 23 02:47:12.807: INFO: Pod pod-configmaps-1616e229-4c3c-49d8-9fb8-17749a1af6f3 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:47:12.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7280" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3464,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:47:12.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:47:24.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9454" for this suite.

• [SLOW TEST:11.249 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":219,"skipped":3473,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:47:24.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-4985
STEP: creating replication controller nodeport-test in namespace services-4985
I1223 02:47:24.186498       6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-4985, replica count: 2
I1223 02:47:27.236996       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 02:47:30.237253       6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 23 02:47:30.237: INFO: Creating new exec pod
Dec 23 02:47:35.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4985 execpodhjcsc -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Dec 23 02:47:35.532: INFO: stderr: "I1223 02:47:35.435066    2750 log.go:172] (0xc0009082c0) (0xc00092e000) Create stream\nI1223 02:47:35.435128    2750 log.go:172] (0xc0009082c0) (0xc00092e000) Stream added, broadcasting: 1\nI1223 02:47:35.439165    2750 log.go:172] (0xc0009082c0) Reply frame received for 1\nI1223 02:47:35.439532    2750 log.go:172] (0xc0009082c0) (0xc0008cc000) Create stream\nI1223 02:47:35.439681    2750 log.go:172] (0xc0009082c0) (0xc0008cc000) Stream added, broadcasting: 3\nI1223 02:47:35.441431    2750 log.go:172] (0xc0009082c0) Reply frame received for 3\nI1223 02:47:35.441492    2750 log.go:172] (0xc0009082c0) (0xc0002adf40) Create stream\nI1223 02:47:35.441518    2750 log.go:172] (0xc0009082c0) (0xc0002adf40) Stream added, broadcasting: 5\nI1223 02:47:35.442383    2750 log.go:172] (0xc0009082c0) Reply frame received for 5\nI1223 02:47:35.521044    2750 log.go:172] (0xc0009082c0) Data frame received for 5\nI1223 02:47:35.521139    2750 log.go:172] (0xc0002adf40) (5) Data frame handling\nI1223 02:47:35.521166    2750 log.go:172] (0xc0002adf40) (5) Data frame sent\nI1223 02:47:35.521182    2750 log.go:172] (0xc0009082c0) Data frame received for 5\nI1223 02:47:35.521194    2750 log.go:172] (0xc0002adf40) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI1223 02:47:35.521274    2750 log.go:172] (0xc0002adf40) (5) Data frame sent\nI1223 02:47:35.521540    2750 log.go:172] (0xc0009082c0) Data frame received for 5\nI1223 02:47:35.521560    2750 log.go:172] (0xc0002adf40) (5) Data frame handling\nI1223 02:47:35.521579    2750 log.go:172] (0xc0009082c0) Data frame received for 3\nI1223 02:47:35.521590    2750 log.go:172] (0xc0008cc000) (3) Data frame handling\nI1223 02:47:35.523639    2750 log.go:172] (0xc0009082c0) Data frame received for 1\nI1223 02:47:35.523677    2750 log.go:172] (0xc00092e000) (1) Data frame handling\nI1223 02:47:35.523700    2750 log.go:172] (0xc00092e000) (1) Data frame sent\nI1223 02:47:35.523721    2750 log.go:172] (0xc0009082c0) (0xc00092e000) Stream removed, broadcasting: 1\nI1223 02:47:35.523872    2750 log.go:172] (0xc0009082c0) Go away received\nI1223 02:47:35.524062    2750 log.go:172] (0xc0009082c0) (0xc00092e000) Stream removed, broadcasting: 1\nI1223 02:47:35.524086    2750 log.go:172] (0xc0009082c0) (0xc0008cc000) Stream removed, broadcasting: 3\nI1223 02:47:35.524098    2750 log.go:172] (0xc0009082c0) (0xc0002adf40) Stream removed, broadcasting: 5\n"
Dec 23 02:47:35.533: INFO: stdout: ""
Dec 23 02:47:35.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4985 execpodhjcsc -- /bin/sh -x -c nc -zv -t -w 2 10.100.162.44 80'
Dec 23 02:47:35.746: INFO: stderr: "I1223 02:47:35.673885    2770 log.go:172] (0xc000938000) (0xc0002ad540) Create stream\nI1223 02:47:35.673934    2770 log.go:172] (0xc000938000) (0xc0002ad540) Stream added, broadcasting: 1\nI1223 02:47:35.676705    2770 log.go:172] (0xc000938000) Reply frame received for 1\nI1223 02:47:35.676761    2770 log.go:172] (0xc000938000) (0xc0002ad5e0) Create stream\nI1223 02:47:35.676776    2770 log.go:172] (0xc000938000) (0xc0002ad5e0) Stream added, broadcasting: 3\nI1223 02:47:35.678140    2770 log.go:172] (0xc000938000) Reply frame received for 3\nI1223 02:47:35.678197    2770 log.go:172] (0xc000938000) (0xc00096a000) Create stream\nI1223 02:47:35.678218    2770 log.go:172] (0xc000938000) (0xc00096a000) Stream added, broadcasting: 5\nI1223 02:47:35.679274    2770 log.go:172] (0xc000938000) Reply frame received for 5\nI1223 02:47:35.736975    2770 log.go:172] (0xc000938000) Data frame received for 3\nI1223 02:47:35.737007    2770 log.go:172] (0xc0002ad5e0) (3) Data frame handling\nI1223 02:47:35.737027    2770 log.go:172] (0xc000938000) Data frame received for 5\nI1223 02:47:35.737033    2770 log.go:172] (0xc00096a000) (5) Data frame handling\nI1223 02:47:35.737041    2770 log.go:172] (0xc00096a000) (5) Data frame sent\nI1223 02:47:35.737046    2770 log.go:172] (0xc000938000) Data frame received for 5\nI1223 02:47:35.737050    2770 log.go:172] (0xc00096a000) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.162.44 80\nConnection to 10.100.162.44 80 port [tcp/http] succeeded!\nI1223 02:47:35.739040    2770 log.go:172] (0xc000938000) Data frame received for 1\nI1223 02:47:35.739076    2770 log.go:172] (0xc0002ad540) (1) Data frame handling\nI1223 02:47:35.739093    2770 log.go:172] (0xc0002ad540) (1) Data frame sent\nI1223 02:47:35.739114    2770 log.go:172] (0xc000938000) (0xc0002ad540) Stream removed, broadcasting: 1\nI1223 02:47:35.739130    2770 log.go:172] (0xc000938000) Go away received\nI1223 02:47:35.739531    2770 log.go:172] (0xc000938000) (0xc0002ad540) Stream removed, broadcasting: 1\nI1223 02:47:35.739554    2770 log.go:172] (0xc000938000) (0xc0002ad5e0) Stream removed, broadcasting: 3\nI1223 02:47:35.739566    2770 log.go:172] (0xc000938000) (0xc00096a000) Stream removed, broadcasting: 5\n"
Dec 23 02:47:35.746: INFO: stdout: ""
Dec 23 02:47:35.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4985 execpodhjcsc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.9 32081'
Dec 23 02:47:35.942: INFO: stderr: "I1223 02:47:35.867013    2792 log.go:172] (0xc0009cf4a0) (0xc0009aa8c0) Create stream\nI1223 02:47:35.867093    2792 log.go:172] (0xc0009cf4a0) (0xc0009aa8c0) Stream added, broadcasting: 1\nI1223 02:47:35.870743    2792 log.go:172] (0xc0009cf4a0) Reply frame received for 1\nI1223 02:47:35.870780    2792 log.go:172] (0xc0009cf4a0) (0xc0006ae5a0) Create stream\nI1223 02:47:35.870798    2792 log.go:172] (0xc0009cf4a0) (0xc0006ae5a0) Stream added, broadcasting: 3\nI1223 02:47:35.871703    2792 log.go:172] (0xc0009cf4a0) Reply frame received for 3\nI1223 02:47:35.871719    2792 log.go:172] (0xc0009cf4a0) (0xc000513360) Create stream\nI1223 02:47:35.871724    2792 log.go:172] (0xc0009cf4a0) (0xc000513360) Stream added, broadcasting: 5\nI1223 02:47:35.872591    2792 log.go:172] (0xc0009cf4a0) Reply frame received for 5\nI1223 02:47:35.933623    2792 log.go:172] (0xc0009cf4a0) Data frame received for 5\nI1223 02:47:35.933674    2792 log.go:172] (0xc000513360) (5) Data frame handling\nI1223 02:47:35.933703    2792 log.go:172] (0xc000513360) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.9 32081\nConnection to 172.18.0.9 32081 port [tcp/32081] succeeded!\nI1223 02:47:35.934056    2792 log.go:172] (0xc0009cf4a0) Data frame received for 3\nI1223 02:47:35.934096    2792 log.go:172] (0xc0006ae5a0) (3) Data frame handling\nI1223 02:47:35.934131    2792 log.go:172] (0xc0009cf4a0) Data frame received for 5\nI1223 02:47:35.934168    2792 log.go:172] (0xc000513360) (5) Data frame handling\nI1223 02:47:35.935732    2792 log.go:172] (0xc0009cf4a0) Data frame received for 1\nI1223 02:47:35.935748    2792 log.go:172] (0xc0009aa8c0) (1) Data frame handling\nI1223 02:47:35.935759    2792 log.go:172] (0xc0009aa8c0) (1) Data frame sent\nI1223 02:47:35.935864    2792 log.go:172] (0xc0009cf4a0) (0xc0009aa8c0) Stream removed, broadcasting: 1\nI1223 02:47:35.936077    2792 log.go:172] (0xc0009cf4a0) Go away received\nI1223 02:47:35.936254    2792 log.go:172] (0xc0009cf4a0) (0xc0009aa8c0) Stream removed, broadcasting: 1\nI1223 02:47:35.936273    2792 log.go:172] (0xc0009cf4a0) (0xc0006ae5a0) Stream removed, broadcasting: 3\nI1223 02:47:35.936287    2792 log.go:172] (0xc0009cf4a0) (0xc000513360) Stream removed, broadcasting: 5\n"
Dec 23 02:47:35.942: INFO: stdout: ""
Dec 23 02:47:35.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4985 execpodhjcsc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 32081'
Dec 23 02:47:36.154: INFO: stderr: "I1223 02:47:36.080412    2812 log.go:172] (0xc00052a210) (0xc000711e00) Create stream\nI1223 02:47:36.080482    2812 log.go:172] (0xc00052a210) (0xc000711e00) Stream added, broadcasting: 1\nI1223 02:47:36.084062    2812 log.go:172] (0xc00052a210) Reply frame received for 1\nI1223 02:47:36.084119    2812 log.go:172] (0xc00052a210) (0xc000489540) Create stream\nI1223 02:47:36.084128    2812 log.go:172] (0xc00052a210) (0xc000489540) Stream added, broadcasting: 3\nI1223 02:47:36.085296    2812 log.go:172] (0xc00052a210) Reply frame received for 3\nI1223 02:47:36.085343    2812 log.go:172] (0xc00052a210) (0xc000711ea0) Create stream\nI1223 02:47:36.085359    2812 log.go:172] (0xc00052a210) (0xc000711ea0) Stream added, broadcasting: 5\nI1223 02:47:36.086288    2812 log.go:172] (0xc00052a210) Reply frame received for 5\nI1223 02:47:36.146509    2812 log.go:172] (0xc00052a210) Data frame received for 3\nI1223 02:47:36.146537    2812 log.go:172] (0xc000489540) (3) Data frame handling\nI1223 02:47:36.146715    2812 log.go:172] (0xc00052a210) Data frame received for 5\nI1223 02:47:36.146736    2812 log.go:172] (0xc000711ea0) (5) Data frame handling\nI1223 02:47:36.146763    2812 log.go:172] (0xc000711ea0) (5) Data frame sent\nI1223 02:47:36.146777    2812 log.go:172] (0xc00052a210) Data frame received for 5\nI1223 02:47:36.146788    2812 log.go:172] (0xc000711ea0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.10 32081\nConnection to 172.18.0.10 32081 port [tcp/32081] succeeded!\nI1223 02:47:36.148211    2812 log.go:172] (0xc00052a210) Data frame received for 1\nI1223 02:47:36.148236    2812 log.go:172] (0xc000711e00) (1) Data frame handling\nI1223 02:47:36.148243    2812 log.go:172] (0xc000711e00) (1) Data frame sent\nI1223 02:47:36.148252    2812 log.go:172] (0xc00052a210) (0xc000711e00) Stream removed, broadcasting: 1\nI1223 02:47:36.148300    2812 log.go:172] (0xc00052a210) Go away received\nI1223 02:47:36.148500    2812 log.go:172] (0xc00052a210) (0xc000711e00) Stream removed, broadcasting: 1\nI1223 02:47:36.148510    2812 log.go:172] (0xc00052a210) (0xc000489540) Stream removed, broadcasting: 3\nI1223 02:47:36.148516    2812 log.go:172] (0xc00052a210) (0xc000711ea0) Stream removed, broadcasting: 5\n"
Dec 23 02:47:36.154: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:47:36.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4985" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.097 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":220,"skipped":3504,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:47:36.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Dec 23 02:47:36.201: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:47:41.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8600" for this suite.

• [SLOW TEST:5.844 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":221,"skipped":3537,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:47:42.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run rc
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
[It] should create an rc from an image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 23 02:47:42.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8184'
Dec 23 02:47:42.468: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 02:47:42.468: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Dec 23 02:47:42.602: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-pj9r5]
Dec 23 02:47:42.602: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-pj9r5" in namespace "kubectl-8184" to be "running and ready"
Dec 23 02:47:42.605: INFO: Pod "e2e-test-httpd-rc-pj9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.052793ms
Dec 23 02:47:44.632: INFO: Pod "e2e-test-httpd-rc-pj9r5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029523687s
Dec 23 02:47:46.636: INFO: Pod "e2e-test-httpd-rc-pj9r5": Phase="Running", Reason="", readiness=true. Elapsed: 4.033987373s
Dec 23 02:47:46.636: INFO: Pod "e2e-test-httpd-rc-pj9r5" satisfied condition "running and ready"
Dec 23 02:47:46.636: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-pj9r5]
Dec 23 02:47:46.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8184'
Dec 23 02:47:46.769: INFO: stderr: ""
Dec 23 02:47:46.769: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.137. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.137. Set the 'ServerName' directive globally to suppress this message\n[Wed Dec 23 02:47:45.528992 2020] [mpm_event:notice] [pid 1:tid 140414582537064] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Dec 23 02:47:45.529036 2020] [core:notice] [pid 1:tid 140414582537064] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1531
Dec 23 02:47:46.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8184'
Dec 23 02:47:46.877: INFO: stderr: ""
Dec 23 02:47:46.877: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:47:46.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8184" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Deprecated] [Conformance]","total":278,"completed":222,"skipped":3540,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:47:46.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-f190e171-62f5-4ef3-9096-cb6787622954 in namespace container-probe-3189
Dec 23 02:47:50.997: INFO: Started pod test-webserver-f190e171-62f5-4ef3-9096-cb6787622954 in namespace container-probe-3189
STEP: checking the pod's current state and verifying that restartCount is present
Dec 23 02:47:51.000: INFO: Initial restart count of pod test-webserver-f190e171-62f5-4ef3-9096-cb6787622954 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:51:51.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3189" for this suite.

• [SLOW TEST:244.804 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:51:51.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-997
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 23 02:51:51.803: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 23 02:52:12.270: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.138:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-997 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:52:12.270: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:52:12.302559       6 log.go:172] (0xc003f6c160) (0xc0024c2d20) Create stream
I1223 02:52:12.302586       6 log.go:172] (0xc003f6c160) (0xc0024c2d20) Stream added, broadcasting: 1
I1223 02:52:12.304145       6 log.go:172] (0xc003f6c160) Reply frame received for 1
I1223 02:52:12.304170       6 log.go:172] (0xc003f6c160) (0xc001800640) Create stream
I1223 02:52:12.304179       6 log.go:172] (0xc003f6c160) (0xc001800640) Stream added, broadcasting: 3
I1223 02:52:12.304944       6 log.go:172] (0xc003f6c160) Reply frame received for 3
I1223 02:52:12.304962       6 log.go:172] (0xc003f6c160) (0xc002b76320) Create stream
I1223 02:52:12.304969       6 log.go:172] (0xc003f6c160) (0xc002b76320) Stream added, broadcasting: 5
I1223 02:52:12.305690       6 log.go:172] (0xc003f6c160) Reply frame received for 5
I1223 02:52:12.393702       6 log.go:172] (0xc003f6c160) Data frame received for 3
I1223 02:52:12.393743       6 log.go:172] (0xc001800640) (3) Data frame handling
I1223 02:52:12.393774       6 log.go:172] (0xc001800640) (3) Data frame sent
I1223 02:52:12.393791       6 log.go:172] (0xc003f6c160) Data frame received for 3
I1223 02:52:12.393807       6 log.go:172] (0xc001800640) (3) Data frame handling
I1223 02:52:12.393975       6 log.go:172] (0xc003f6c160) Data frame received for 5
I1223 02:52:12.394006       6 log.go:172] (0xc002b76320) (5) Data frame handling
I1223 02:52:12.396229       6 log.go:172] (0xc003f6c160) Data frame received for 1
I1223 02:52:12.396254       6 log.go:172] (0xc0024c2d20) (1) Data frame handling
I1223 02:52:12.396267       6 log.go:172] (0xc0024c2d20) (1) Data frame sent
I1223 02:52:12.396374       6 log.go:172] (0xc003f6c160) (0xc0024c2d20) Stream removed, broadcasting: 1
I1223 02:52:12.396502       6 log.go:172] (0xc003f6c160) Go away received
I1223 02:52:12.396525       6 log.go:172] (0xc003f6c160) (0xc0024c2d20) Stream removed, broadcasting: 1
I1223 02:52:12.396557       6 log.go:172] (0xc003f6c160) (0xc001800640) Stream removed, broadcasting: 3
I1223 02:52:12.396582       6 log.go:172] (0xc003f6c160) (0xc002b76320) Stream removed, broadcasting: 5
Dec 23 02:52:12.396: INFO: Found all expected endpoints: [netserver-0]
Dec 23 02:52:12.399: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.32:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-997 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 23 02:52:12.400: INFO: >>> kubeConfig: /root/.kube/config
I1223 02:52:12.431922       6 log.go:172] (0xc0025740b0) (0xc001800c80) Create stream
I1223 02:52:12.431958       6 log.go:172] (0xc0025740b0) (0xc001800c80) Stream added, broadcasting: 1
I1223 02:52:12.434212       6 log.go:172] (0xc0025740b0) Reply frame received for 1
I1223 02:52:12.434283       6 log.go:172] (0xc0025740b0) (0xc001800fa0) Create stream
I1223 02:52:12.434308       6 log.go:172] (0xc0025740b0) (0xc001800fa0) Stream added, broadcasting: 3
I1223 02:52:12.435361       6 log.go:172] (0xc0025740b0) Reply frame received for 3
I1223 02:52:12.435407       6 log.go:172] (0xc0025740b0) (0xc0028ea780) Create stream
I1223 02:52:12.435424       6 log.go:172] (0xc0025740b0) (0xc0028ea780) Stream added, broadcasting: 5
I1223 02:52:12.436492       6 log.go:172] (0xc0025740b0) Reply frame received for 5
I1223 02:52:12.504657       6 log.go:172] (0xc0025740b0) Data frame received for 3
I1223 02:52:12.504691       6 log.go:172] (0xc001800fa0) (3) Data frame handling
I1223 02:52:12.504714       6 log.go:172] (0xc001800fa0) (3) Data frame sent
I1223 02:52:12.504727       6 log.go:172] (0xc0025740b0) Data frame received for 3
I1223 02:52:12.504736       6 log.go:172] (0xc001800fa0) (3) Data frame handling
I1223 02:52:12.505049       6 log.go:172] (0xc0025740b0) Data frame received for 5
I1223 02:52:12.505080       6 log.go:172] (0xc0028ea780) (5) Data frame handling
I1223 02:52:12.506694       6 log.go:172] (0xc0025740b0) Data frame received for 1
I1223 02:52:12.506722       6 log.go:172] (0xc001800c80) (1) Data frame handling
I1223 02:52:12.506735       6 log.go:172] (0xc001800c80) (1) Data frame sent
I1223 02:52:12.506747       6 log.go:172] (0xc0025740b0) (0xc001800c80) Stream removed, broadcasting: 1
I1223 02:52:12.506764       6 log.go:172] (0xc0025740b0) Go away received
I1223 02:52:12.506868       6 log.go:172] (0xc0025740b0) (0xc001800c80) Stream removed, broadcasting: 1
I1223 02:52:12.506886       6 log.go:172] (0xc0025740b0) (0xc001800fa0) Stream removed, broadcasting: 3
I1223 02:52:12.506895       6 log.go:172] (0xc0025740b0) (0xc0028ea780) Stream removed, broadcasting: 5
Dec 23 02:52:12.506: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:52:12.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-997" for this suite.

• [SLOW TEST:20.826 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3594,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:52:12.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 23 02:52:13.043: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 23 02:52:15.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288733, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288733, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288733, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288733, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 02:52:18.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:52:18.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:52:19.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-41" for this suite.
STEP: Destroying namespace "webhook-41-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.428 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":225,"skipped":3601,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:52:19.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Dec 23 02:52:26.595: INFO: Successfully updated pod "adopt-release-7rwvp"
STEP: Checking that the Job readopts the Pod
Dec 23 02:52:26.595: INFO: Waiting up to 15m0s for pod "adopt-release-7rwvp" in namespace "job-1010" to be "adopted"
Dec 23 02:52:26.599: INFO: Pod "adopt-release-7rwvp": Phase="Running", Reason="", readiness=true. Elapsed: 4.103864ms
Dec 23 02:52:28.603: INFO: Pod "adopt-release-7rwvp": Phase="Running", Reason="", readiness=true. Elapsed: 2.008334202s
Dec 23 02:52:28.604: INFO: Pod "adopt-release-7rwvp" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Dec 23 02:52:29.112: INFO: Successfully updated pod "adopt-release-7rwvp"
STEP: Checking that the Job releases the Pod
Dec 23 02:52:29.112: INFO: Waiting up to 15m0s for pod "adopt-release-7rwvp" in namespace "job-1010" to be "released"
Dec 23 02:52:29.130: INFO: Pod "adopt-release-7rwvp": Phase="Running", Reason="", readiness=true. Elapsed: 17.82074ms
Dec 23 02:52:31.134: INFO: Pod "adopt-release-7rwvp": Phase="Running", Reason="", readiness=true. Elapsed: 2.021873332s
Dec 23 02:52:31.134: INFO: Pod "adopt-release-7rwvp" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:52:31.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1010" for this suite.

• [SLOW TEST:11.199 seconds]
[sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":226,"skipped":3613,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:52:31.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-3cc35122-a784-414e-a0ac-91936863a62b
STEP: Creating a pod to test consume secrets
Dec 23 02:52:31.336: INFO: Waiting up to 5m0s for pod "pod-secrets-229578b9-ff6e-4f6b-b640-5911f8f6f0d6" in namespace "secrets-8463" to be "success or failure"
Dec 23 02:52:31.341: INFO: Pod "pod-secrets-229578b9-ff6e-4f6b-b640-5911f8f6f0d6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.467855ms
Dec 23 02:52:33.397: INFO: Pod "pod-secrets-229578b9-ff6e-4f6b-b640-5911f8f6f0d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06103669s
Dec 23 02:52:35.401: INFO: Pod "pod-secrets-229578b9-ff6e-4f6b-b640-5911f8f6f0d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064660338s
STEP: Saw pod success
Dec 23 02:52:35.401: INFO: Pod "pod-secrets-229578b9-ff6e-4f6b-b640-5911f8f6f0d6" satisfied condition "success or failure"
Dec 23 02:52:35.403: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-229578b9-ff6e-4f6b-b640-5911f8f6f0d6 container secret-volume-test: 
STEP: delete the pod
Dec 23 02:52:35.454: INFO: Waiting for pod pod-secrets-229578b9-ff6e-4f6b-b640-5911f8f6f0d6 to disappear
Dec 23 02:52:35.473: INFO: Pod pod-secrets-229578b9-ff6e-4f6b-b640-5911f8f6f0d6 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:52:35.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8463" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3623,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:52:35.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-projected-rwt4
STEP: Creating a pod to test atomic-volume-subpath
Dec 23 02:52:35.572: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rwt4" in namespace "subpath-9676" to be "success or failure"
Dec 23 02:52:35.587: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.799301ms
Dec 23 02:52:37.591: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019248307s
Dec 23 02:52:39.595: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Running", Reason="", readiness=true. Elapsed: 4.022940727s
Dec 23 02:52:41.599: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Running", Reason="", readiness=true. Elapsed: 6.027027792s
Dec 23 02:52:43.602: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Running", Reason="", readiness=true. Elapsed: 8.030671203s
Dec 23 02:52:45.630: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Running", Reason="", readiness=true. Elapsed: 10.058890263s
Dec 23 02:52:47.635: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Running", Reason="", readiness=true. Elapsed: 12.062977395s
Dec 23 02:52:49.639: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Running", Reason="", readiness=true. Elapsed: 14.067100994s
Dec 23 02:52:51.643: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Running", Reason="", readiness=true. Elapsed: 16.071412436s
Dec 23 02:52:53.648: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Running", Reason="", readiness=true. Elapsed: 18.076609339s
Dec 23 02:52:55.653: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Running", Reason="", readiness=true. Elapsed: 20.08097467s
Dec 23 02:52:57.657: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Running", Reason="", readiness=true. Elapsed: 22.085180385s
Dec 23 02:52:59.660: INFO: Pod "pod-subpath-test-projected-rwt4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.08856519s
STEP: Saw pod success
Dec 23 02:52:59.660: INFO: Pod "pod-subpath-test-projected-rwt4" satisfied condition "success or failure"
Dec 23 02:52:59.663: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-rwt4 container test-container-subpath-projected-rwt4: 
STEP: delete the pod
Dec 23 02:52:59.691: INFO: Waiting for pod pod-subpath-test-projected-rwt4 to disappear
Dec 23 02:52:59.701: INFO: Pod pod-subpath-test-projected-rwt4 no longer exists
STEP: Deleting pod pod-subpath-test-projected-rwt4
Dec 23 02:52:59.702: INFO: Deleting pod "pod-subpath-test-projected-rwt4" in namespace "subpath-9676"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:52:59.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9676" for this suite.

• [SLOW TEST:24.229 seconds]
[sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":228,"skipped":3624,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:52:59.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:53:00.060: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"12dfd6b8-819e-4a01-88af-aa2469dac3a7", Controller:(*bool)(0xc006081282), BlockOwnerDeletion:(*bool)(0xc006081283)}}
Dec 23 02:53:00.208: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9e8098b0-7190-4ed9-9c45-269f9d33a0ad", Controller:(*bool)(0xc00608141a), BlockOwnerDeletion:(*bool)(0xc00608141b)}}
Dec 23 02:53:00.212: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c767d760-f6dc-4a73-8f1f-8e91d53743ce", Controller:(*bool)(0xc006156fe2), BlockOwnerDeletion:(*bool)(0xc006156fe3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:53:05.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6145" for this suite.

• [SLOW TEST:5.539 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":229,"skipped":3641,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:53:05.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:53:05.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7956" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":230,"skipped":3672,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:53:05.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5235
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-5235
I1223 02:53:05.488000       6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5235, replica count: 2
I1223 02:53:08.538390       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1223 02:53:11.538654       6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 23 02:53:11.538: INFO: Creating new exec pod
Dec 23 02:53:16.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5235 execpodl7rbh -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Dec 23 02:53:19.597: INFO: stderr: "I1223 02:53:19.465446    2895 log.go:172] (0xc00082abb0) (0xc00063e780) Create stream\nI1223 02:53:19.465481    2895 log.go:172] (0xc00082abb0) (0xc00063e780) Stream added, broadcasting: 1\nI1223 02:53:19.468008    2895 log.go:172] (0xc00082abb0) Reply frame received for 1\nI1223 02:53:19.468060    2895 log.go:172] (0xc00082abb0) (0xc000367540) Create stream\nI1223 02:53:19.468075    2895 log.go:172] (0xc00082abb0) (0xc000367540) Stream added, broadcasting: 3\nI1223 02:53:19.469338    2895 log.go:172] (0xc00082abb0) Reply frame received for 3\nI1223 02:53:19.469391    2895 log.go:172] (0xc00082abb0) (0xc000754000) Create stream\nI1223 02:53:19.469402    2895 log.go:172] (0xc00082abb0) (0xc000754000) Stream added, broadcasting: 5\nI1223 02:53:19.470470    2895 log.go:172] (0xc00082abb0) Reply frame received for 5\nI1223 02:53:19.587069    2895 log.go:172] (0xc00082abb0) Data frame received for 5\nI1223 02:53:19.587099    2895 log.go:172] (0xc000754000) (5) Data frame handling\nI1223 02:53:19.587112    2895 log.go:172] (0xc000754000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI1223 02:53:19.587562    2895 log.go:172] (0xc00082abb0) Data frame received for 5\nI1223 02:53:19.587588    2895 log.go:172] (0xc000754000) (5) Data frame handling\nI1223 02:53:19.587610    2895 log.go:172] (0xc000754000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1223 02:53:19.587739    2895 log.go:172] (0xc00082abb0) Data frame received for 3\nI1223 02:53:19.587778    2895 log.go:172] (0xc000367540) (3) Data frame handling\nI1223 02:53:19.587897    2895 log.go:172] (0xc00082abb0) Data frame received for 5\nI1223 02:53:19.587920    2895 log.go:172] (0xc000754000) (5) Data frame handling\nI1223 02:53:19.589633    2895 log.go:172] (0xc00082abb0) Data frame received for 1\nI1223 02:53:19.589665    2895 log.go:172] (0xc00063e780) (1) Data frame handling\nI1223 02:53:19.589696    2895 log.go:172] (0xc00063e780) (1) Data frame sent\nI1223 02:53:19.589733    2895 log.go:172] (0xc00082abb0) (0xc00063e780) Stream removed, broadcasting: 1\nI1223 02:53:19.589765    2895 log.go:172] (0xc00082abb0) Go away received\nI1223 02:53:19.590173    2895 log.go:172] (0xc00082abb0) (0xc00063e780) Stream removed, broadcasting: 1\nI1223 02:53:19.590192    2895 log.go:172] (0xc00082abb0) (0xc000367540) Stream removed, broadcasting: 3\nI1223 02:53:19.590201    2895 log.go:172] (0xc00082abb0) (0xc000754000) Stream removed, broadcasting: 5\n"
Dec 23 02:53:19.597: INFO: stdout: ""
Dec 23 02:53:19.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5235 execpodl7rbh -- /bin/sh -x -c nc -zv -t -w 2 10.105.136.17 80'
Dec 23 02:53:19.796: INFO: stderr: "I1223 02:53:19.734293    2929 log.go:172] (0xc0000f6d10) (0xc000613d60) Create stream\nI1223 02:53:19.734349    2929 log.go:172] (0xc0000f6d10) (0xc000613d60) Stream added, broadcasting: 1\nI1223 02:53:19.736784    2929 log.go:172] (0xc0000f6d10) Reply frame received for 1\nI1223 02:53:19.736822    2929 log.go:172] (0xc0000f6d10) (0xc000998000) Create stream\nI1223 02:53:19.736938    2929 log.go:172] (0xc0000f6d10) (0xc000998000) Stream added, broadcasting: 3\nI1223 02:53:19.737939    2929 log.go:172] (0xc0000f6d10) Reply frame received for 3\nI1223 02:53:19.737997    2929 log.go:172] (0xc0000f6d10) (0xc000613e00) Create stream\nI1223 02:53:19.738020    2929 log.go:172] (0xc0000f6d10) (0xc000613e00) Stream added, broadcasting: 5\nI1223 02:53:19.739312    2929 log.go:172] (0xc0000f6d10) Reply frame received for 5\nI1223 02:53:19.789784    2929 log.go:172] (0xc0000f6d10) Data frame received for 3\nI1223 02:53:19.789829    2929 log.go:172] (0xc000998000) (3) Data frame handling\nI1223 02:53:19.789856    2929 log.go:172] (0xc0000f6d10) Data frame received for 5\nI1223 02:53:19.789871    2929 log.go:172] (0xc000613e00) (5) Data frame handling\nI1223 02:53:19.789885    2929 log.go:172] (0xc000613e00) (5) Data frame sent\nI1223 02:53:19.789896    2929 log.go:172] (0xc0000f6d10) Data frame received for 5\nI1223 02:53:19.789904    2929 log.go:172] (0xc000613e00) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.136.17 80\nConnection to 10.105.136.17 80 port [tcp/http] succeeded!\nI1223 02:53:19.791255    2929 log.go:172] (0xc0000f6d10) Data frame received for 1\nI1223 02:53:19.791289    2929 log.go:172] (0xc000613d60) (1) Data frame handling\nI1223 02:53:19.791310    2929 log.go:172] (0xc000613d60) (1) Data frame sent\nI1223 02:53:19.791329    2929 log.go:172] (0xc0000f6d10) (0xc000613d60) Stream removed, broadcasting: 1\nI1223 02:53:19.791404    2929 log.go:172] (0xc0000f6d10) Go away received\nI1223 02:53:19.791752    2929 log.go:172] (0xc0000f6d10) (0xc000613d60) Stream removed, broadcasting: 1\nI1223 02:53:19.791780    2929 log.go:172] (0xc0000f6d10) (0xc000998000) Stream removed, broadcasting: 3\nI1223 02:53:19.791791    2929 log.go:172] (0xc0000f6d10) (0xc000613e00) Stream removed, broadcasting: 5\n"
Dec 23 02:53:19.796: INFO: stdout: ""
Dec 23 02:53:19.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5235 execpodl7rbh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.9 31448'
Dec 23 02:53:19.996: INFO: stderr: "I1223 02:53:19.918841    2949 log.go:172] (0xc000206370) (0xc0003c95e0) Create stream\nI1223 02:53:19.918884    2949 log.go:172] (0xc000206370) (0xc0003c95e0) Stream added, broadcasting: 1\nI1223 02:53:19.920937    2949 log.go:172] (0xc000206370) Reply frame received for 1\nI1223 02:53:19.920986    2949 log.go:172] (0xc000206370) (0xc000a2c000) Create stream\nI1223 02:53:19.921001    2949 log.go:172] (0xc000206370) (0xc000a2c000) Stream added, broadcasting: 3\nI1223 02:53:19.921828    2949 log.go:172] (0xc000206370) Reply frame received for 3\nI1223 02:53:19.921844    2949 log.go:172] (0xc000206370) (0xc000a2c0a0) Create stream\nI1223 02:53:19.921850    2949 log.go:172] (0xc000206370) (0xc000a2c0a0) Stream added, broadcasting: 5\nI1223 02:53:19.922653    2949 log.go:172] (0xc000206370) Reply frame received for 5\nI1223 02:53:19.988063    2949 log.go:172] (0xc000206370) Data frame received for 5\nI1223 02:53:19.988098    2949 log.go:172] (0xc000a2c0a0) (5) Data frame handling\nI1223 02:53:19.988109    2949 log.go:172] (0xc000a2c0a0) (5) Data frame sent\nI1223 02:53:19.988116    2949 log.go:172] (0xc000206370) Data frame received for 5\nI1223 02:53:19.988124    2949 log.go:172] (0xc000a2c0a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.9 31448\nConnection to 172.18.0.9 31448 port [tcp/31448] succeeded!\nI1223 02:53:19.988146    2949 log.go:172] (0xc000206370) Data frame received for 3\nI1223 02:53:19.988152    2949 log.go:172] (0xc000a2c000) (3) Data frame handling\nI1223 02:53:19.989436    2949 log.go:172] (0xc000206370) Data frame received for 1\nI1223 02:53:19.989466    2949 log.go:172] (0xc0003c95e0) (1) Data frame handling\nI1223 02:53:19.989483    2949 log.go:172] (0xc0003c95e0) (1) Data frame sent\nI1223 02:53:19.989500    2949 log.go:172] (0xc000206370) (0xc0003c95e0) Stream removed, broadcasting: 1\nI1223 02:53:19.989522    2949 log.go:172] (0xc000206370) Go away received\nI1223 02:53:19.990023    2949 log.go:172] (0xc000206370) (0xc0003c95e0) Stream removed, broadcasting: 1\nI1223 02:53:19.990040    2949 log.go:172] (0xc000206370) (0xc000a2c000) Stream removed, broadcasting: 3\nI1223 02:53:19.990057    2949 log.go:172] (0xc000206370) (0xc000a2c0a0) Stream removed, broadcasting: 5\n"
Dec 23 02:53:19.996: INFO: stdout: ""
Dec 23 02:53:19.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5235 execpodl7rbh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.10 31448'
Dec 23 02:53:20.189: INFO: stderr: "I1223 02:53:20.112390    2970 log.go:172] (0xc000b00580) (0xc0006f3d60) Create stream\nI1223 02:53:20.112441    2970 log.go:172] (0xc000b00580) (0xc0006f3d60) Stream added, broadcasting: 1\nI1223 02:53:20.115087    2970 log.go:172] (0xc000b00580) Reply frame received for 1\nI1223 02:53:20.115141    2970 log.go:172] (0xc000b00580) (0xc00061e640) Create stream\nI1223 02:53:20.115159    2970 log.go:172] (0xc000b00580) (0xc00061e640) Stream added, broadcasting: 3\nI1223 02:53:20.116067    2970 log.go:172] (0xc000b00580) Reply frame received for 3\nI1223 02:53:20.116103    2970 log.go:172] (0xc000b00580) (0xc000713400) Create stream\nI1223 02:53:20.116114    2970 log.go:172] (0xc000b00580) (0xc000713400) Stream added, broadcasting: 5\nI1223 02:53:20.117182    2970 log.go:172] (0xc000b00580) Reply frame received for 5\nI1223 02:53:20.178161    2970 log.go:172] (0xc000b00580) Data frame received for 5\nI1223 02:53:20.178202    2970 log.go:172] (0xc000713400) (5) Data frame handling\nI1223 02:53:20.178227    2970 log.go:172] (0xc000713400) (5) Data frame sent\nI1223 02:53:20.178240    2970 log.go:172] (0xc000b00580) Data frame received for 5\nI1223 02:53:20.178255    2970 log.go:172] (0xc000713400) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.10 31448\nConnection to 172.18.0.10 31448 port [tcp/31448] succeeded!\nI1223 02:53:20.178284    2970 log.go:172] (0xc000713400) (5) Data frame sent\nI1223 02:53:20.178524    2970 log.go:172] (0xc000b00580) Data frame received for 3\nI1223 02:53:20.178540    2970 log.go:172] (0xc00061e640) (3) Data frame handling\nI1223 02:53:20.178611    2970 log.go:172] (0xc000b00580) Data frame received for 5\nI1223 02:53:20.178651    2970 log.go:172] (0xc000713400) (5) Data frame handling\nI1223 02:53:20.180380    2970 log.go:172] (0xc000b00580) Data frame received for 1\nI1223 02:53:20.180401    2970 log.go:172] (0xc0006f3d60) (1) Data frame handling\nI1223 02:53:20.180410    2970 log.go:172] (0xc0006f3d60) (1) Data frame sent\nI1223 02:53:20.180431    2970 log.go:172] (0xc000b00580) (0xc0006f3d60) Stream removed, broadcasting: 1\nI1223 02:53:20.180472    2970 log.go:172] (0xc000b00580) Go away received\nI1223 02:53:20.180971    2970 log.go:172] (0xc000b00580) (0xc0006f3d60) Stream removed, broadcasting: 1\nI1223 02:53:20.180991    2970 log.go:172] (0xc000b00580) (0xc00061e640) Stream removed, broadcasting: 3\nI1223 02:53:20.180999    2970 log.go:172] (0xc000b00580) (0xc000713400) Stream removed, broadcasting: 5\n"
Dec 23 02:53:20.189: INFO: stdout: ""
Dec 23 02:53:20.189: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:53:20.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5235" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:14.926 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":231,"skipped":3718,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:53:20.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 23 02:53:20.412: INFO: Waiting up to 5m0s for pod "pod-4d3f1682-c67a-4139-acdd-c59bae81af5a" in namespace "emptydir-4345" to be "success or failure"
Dec 23 02:53:20.439: INFO: Pod "pod-4d3f1682-c67a-4139-acdd-c59bae81af5a": Phase="Pending", Reason="", readiness=false. Elapsed: 26.99114ms
Dec 23 02:53:22.443: INFO: Pod "pod-4d3f1682-c67a-4139-acdd-c59bae81af5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030968127s
Dec 23 02:53:24.447: INFO: Pod "pod-4d3f1682-c67a-4139-acdd-c59bae81af5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035004613s
STEP: Saw pod success
Dec 23 02:53:24.447: INFO: Pod "pod-4d3f1682-c67a-4139-acdd-c59bae81af5a" satisfied condition "success or failure"
Dec 23 02:53:24.451: INFO: Trying to get logs from node jerma-worker2 pod pod-4d3f1682-c67a-4139-acdd-c59bae81af5a container test-container: 
STEP: delete the pod
Dec 23 02:53:24.494: INFO: Waiting for pod pod-4d3f1682-c67a-4139-acdd-c59bae81af5a to disappear
Dec 23 02:53:24.505: INFO: Pod pod-4d3f1682-c67a-4139-acdd-c59bae81af5a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:53:24.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4345" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3730,"failed":0}
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:53:24.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Dec 23 02:53:24.634: INFO: Waiting up to 5m0s for pod "downward-api-6cecac67-fe7a-4b4c-a970-8d28f0915882" in namespace "downward-api-1423" to be "success or failure"
Dec 23 02:53:24.643: INFO: Pod "downward-api-6cecac67-fe7a-4b4c-a970-8d28f0915882": Phase="Pending", Reason="", readiness=false. Elapsed: 8.919913ms
Dec 23 02:53:26.649: INFO: Pod "downward-api-6cecac67-fe7a-4b4c-a970-8d28f0915882": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014765104s
Dec 23 02:53:28.654: INFO: Pod "downward-api-6cecac67-fe7a-4b4c-a970-8d28f0915882": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019818875s
STEP: Saw pod success
Dec 23 02:53:28.654: INFO: Pod "downward-api-6cecac67-fe7a-4b4c-a970-8d28f0915882" satisfied condition "success or failure"
Dec 23 02:53:28.656: INFO: Trying to get logs from node jerma-worker2 pod downward-api-6cecac67-fe7a-4b4c-a970-8d28f0915882 container dapi-container: 
STEP: delete the pod
Dec 23 02:53:29.076: INFO: Waiting for pod downward-api-6cecac67-fe7a-4b4c-a970-8d28f0915882 to disappear
Dec 23 02:53:29.085: INFO: Pod downward-api-6cecac67-fe7a-4b4c-a970-8d28f0915882 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:53:29.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1423" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3733,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:53:29.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-f5b1fdeb-4021-4e00-bf8a-e5bd461538f3
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-f5b1fdeb-4021-4e00-bf8a-e5bd461538f3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:53:35.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8785" for this suite.

• [SLOW TEST:6.274 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3743,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:53:35.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:53:51.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1198" for this suite.

• [SLOW TEST:16.377 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":235,"skipped":3777,"failed":0}
S
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:53:51.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Dec 23 02:53:51.823: INFO: Waiting up to 5m0s for pod "downward-api-fad6eac4-029b-46f1-8307-16c36b2c9669" in namespace "downward-api-193" to be "success or failure"
Dec 23 02:53:51.847: INFO: Pod "downward-api-fad6eac4-029b-46f1-8307-16c36b2c9669": Phase="Pending", Reason="", readiness=false. Elapsed: 23.9378ms
Dec 23 02:53:53.919: INFO: Pod "downward-api-fad6eac4-029b-46f1-8307-16c36b2c9669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095508617s
Dec 23 02:53:55.923: INFO: Pod "downward-api-fad6eac4-029b-46f1-8307-16c36b2c9669": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099664345s
STEP: Saw pod success
Dec 23 02:53:55.923: INFO: Pod "downward-api-fad6eac4-029b-46f1-8307-16c36b2c9669" satisfied condition "success or failure"
Dec 23 02:53:55.926: INFO: Trying to get logs from node jerma-worker2 pod downward-api-fad6eac4-029b-46f1-8307-16c36b2c9669 container dapi-container: 
STEP: delete the pod
Dec 23 02:53:56.000: INFO: Waiting for pod downward-api-fad6eac4-029b-46f1-8307-16c36b2c9669 to disappear
Dec 23 02:53:56.003: INFO: Pod downward-api-fad6eac4-029b-46f1-8307-16c36b2c9669 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:53:56.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-193" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3778,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:53:56.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 23 02:53:56.071: INFO: Waiting up to 5m0s for pod "pod-7aaea5cb-d214-4d3f-8117-5dbf1e71a006" in namespace "emptydir-592" to be "success or failure"
Dec 23 02:53:56.074: INFO: Pod "pod-7aaea5cb-d214-4d3f-8117-5dbf1e71a006": Phase="Pending", Reason="", readiness=false. Elapsed: 3.373978ms
Dec 23 02:53:58.093: INFO: Pod "pod-7aaea5cb-d214-4d3f-8117-5dbf1e71a006": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021880725s
Dec 23 02:54:00.098: INFO: Pod "pod-7aaea5cb-d214-4d3f-8117-5dbf1e71a006": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027620138s
STEP: Saw pod success
Dec 23 02:54:00.098: INFO: Pod "pod-7aaea5cb-d214-4d3f-8117-5dbf1e71a006" satisfied condition "success or failure"
Dec 23 02:54:00.102: INFO: Trying to get logs from node jerma-worker pod pod-7aaea5cb-d214-4d3f-8117-5dbf1e71a006 container test-container: 
STEP: delete the pod
Dec 23 02:54:00.153: INFO: Waiting for pod pod-7aaea5cb-d214-4d3f-8117-5dbf1e71a006 to disappear
Dec 23 02:54:00.158: INFO: Pod pod-7aaea5cb-d214-4d3f-8117-5dbf1e71a006 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:54:00.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-592" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3804,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:54:00.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:54:04.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9617" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3824,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:54:04.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 23 02:54:10.030: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:54:10.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8683" for this suite.

• [SLOW TEST:5.686 seconds]
[k8s.io] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3857,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:54:10.287: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-f90e358f-6e69-41d4-b112-cb2d85d6c2f5
STEP: Creating a pod to test consume configMaps
Dec 23 02:54:10.365: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fbba1edc-7b09-4cbb-a912-8d687ebe6c66" in namespace "projected-1727" to be "success or failure"
Dec 23 02:54:10.370: INFO: Pod "pod-projected-configmaps-fbba1edc-7b09-4cbb-a912-8d687ebe6c66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312258ms
Dec 23 02:54:12.373: INFO: Pod "pod-projected-configmaps-fbba1edc-7b09-4cbb-a912-8d687ebe6c66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008076813s
Dec 23 02:54:14.474: INFO: Pod "pod-projected-configmaps-fbba1edc-7b09-4cbb-a912-8d687ebe6c66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108632318s
Dec 23 02:54:16.817: INFO: Pod "pod-projected-configmaps-fbba1edc-7b09-4cbb-a912-8d687ebe6c66": Phase="Running", Reason="", readiness=true. Elapsed: 6.451680964s
Dec 23 02:54:18.824: INFO: Pod "pod-projected-configmaps-fbba1edc-7b09-4cbb-a912-8d687ebe6c66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.458317341s
STEP: Saw pod success
Dec 23 02:54:18.824: INFO: Pod "pod-projected-configmaps-fbba1edc-7b09-4cbb-a912-8d687ebe6c66" satisfied condition "success or failure"
Dec 23 02:54:18.826: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-fbba1edc-7b09-4cbb-a912-8d687ebe6c66 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 23 02:54:18.842: INFO: Waiting for pod pod-projected-configmaps-fbba1edc-7b09-4cbb-a912-8d687ebe6c66 to disappear
Dec 23 02:54:18.847: INFO: Pod pod-projected-configmaps-fbba1edc-7b09-4cbb-a912-8d687ebe6c66 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:54:18.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1727" for this suite.

• [SLOW TEST:8.585 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3865,"failed":0}
SSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:54:18.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-1e41c797-015c-4ee0-b590-7cae1e93ed0b
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:54:18.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1628" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":241,"skipped":3868,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:54:19.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 23 02:54:19.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30e4eef2-c6f7-428c-836c-7bd8ece8c1ab" in namespace "downward-api-8759" to be "success or failure"
Dec 23 02:54:19.135: INFO: Pod "downwardapi-volume-30e4eef2-c6f7-428c-836c-7bd8ece8c1ab": Phase="Pending", Reason="", readiness=false. Elapsed: 11.29322ms
Dec 23 02:54:21.196: INFO: Pod "downwardapi-volume-30e4eef2-c6f7-428c-836c-7bd8ece8c1ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071961802s
Dec 23 02:54:23.584: INFO: Pod "downwardapi-volume-30e4eef2-c6f7-428c-836c-7bd8ece8c1ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.460912995s
STEP: Saw pod success
Dec 23 02:54:23.585: INFO: Pod "downwardapi-volume-30e4eef2-c6f7-428c-836c-7bd8ece8c1ab" satisfied condition "success or failure"
Dec 23 02:54:23.588: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-30e4eef2-c6f7-428c-836c-7bd8ece8c1ab container client-container: 
STEP: delete the pod
Dec 23 02:54:24.042: INFO: Waiting for pod downwardapi-volume-30e4eef2-c6f7-428c-836c-7bd8ece8c1ab to disappear
Dec 23 02:54:24.297: INFO: Pod downwardapi-volume-30e4eef2-c6f7-428c-836c-7bd8ece8c1ab no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:54:24.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8759" for this suite.

• [SLOW TEST:5.319 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3884,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:54:24.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Dec 23 02:54:33.775: INFO: Pod pod-hostip-a0274b3c-fcb6-4e5f-abd7-2fd14fb8c1d5 has hostIP: 172.18.0.10
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:54:33.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7902" for this suite.

• [SLOW TEST:9.457 seconds]
[k8s.io] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3899,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:54:33.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 23 02:54:34.303: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 23 02:54:36.412: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288874, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288874, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288874, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288874, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:54:38.415: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288874, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288874, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288874, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288874, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 02:54:41.925: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:54:41.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1598" for this suite.
STEP: Destroying namespace "webhook-1598-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.801 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":244,"skipped":3920,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:54:42.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 23 02:54:43.915: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 23 02:54:45.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288883, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288883, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288884, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288883, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:54:47.993: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288883, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288883, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288884, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288883, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 23 02:54:51.016: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:54:51.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6515" for this suite.
STEP: Destroying namespace "webhook-6515-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.700 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":245,"skipped":3958,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:54:51.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:54:51.385: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-fe991528-d0eb-4754-8d7c-0afcae66b001" in namespace "security-context-test-8846" to be "success or failure"
Dec 23 02:54:51.401: INFO: Pod "alpine-nnp-false-fe991528-d0eb-4754-8d7c-0afcae66b001": Phase="Pending", Reason="", readiness=false. Elapsed: 15.872771ms
Dec 23 02:54:53.406: INFO: Pod "alpine-nnp-false-fe991528-d0eb-4754-8d7c-0afcae66b001": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020411711s
Dec 23 02:54:55.452: INFO: Pod "alpine-nnp-false-fe991528-d0eb-4754-8d7c-0afcae66b001": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066963383s
Dec 23 02:54:57.456: INFO: Pod "alpine-nnp-false-fe991528-d0eb-4754-8d7c-0afcae66b001": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07008736s
Dec 23 02:54:57.456: INFO: Pod "alpine-nnp-false-fe991528-d0eb-4754-8d7c-0afcae66b001" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:54:57.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8846" for this suite.

• [SLOW TEST:6.186 seconds]
[k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when creating containers with AllowPrivilegeEscalation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3980,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:54:57.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Dec 23 02:54:57.591: INFO: Waiting up to 5m0s for pod "downward-api-9d78bd07-a754-4435-99c5-e5b884cb7750" in namespace "downward-api-2734" to be "success or failure"
Dec 23 02:54:57.603: INFO: Pod "downward-api-9d78bd07-a754-4435-99c5-e5b884cb7750": Phase="Pending", Reason="", readiness=false. Elapsed: 11.71412ms
Dec 23 02:54:59.606: INFO: Pod "downward-api-9d78bd07-a754-4435-99c5-e5b884cb7750": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014733606s
Dec 23 02:55:01.610: INFO: Pod "downward-api-9d78bd07-a754-4435-99c5-e5b884cb7750": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018875762s
STEP: Saw pod success
Dec 23 02:55:01.610: INFO: Pod "downward-api-9d78bd07-a754-4435-99c5-e5b884cb7750" satisfied condition "success or failure"
Dec 23 02:55:01.613: INFO: Trying to get logs from node jerma-worker2 pod downward-api-9d78bd07-a754-4435-99c5-e5b884cb7750 container dapi-container: 
STEP: delete the pod
Dec 23 02:55:01.649: INFO: Waiting for pod downward-api-9d78bd07-a754-4435-99c5-e5b884cb7750 to disappear
Dec 23 02:55:01.686: INFO: Pod downward-api-9d78bd07-a754-4435-99c5-e5b884cb7750 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:55:01.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2734" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":3995,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:55:01.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:55:01.739: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 23 02:55:01.759: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 23 02:55:06.763: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 23 02:55:06.764: INFO: Creating deployment "test-rolling-update-deployment"
Dec 23 02:55:06.831: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 23 02:55:06.884: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 23 02:55:08.891: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Dec 23 02:55:08.894: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288907, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288907, loc:(*time.Location)(0x791c680)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288907, loc:(*time.Location)(0x791c680)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63744288906, loc:(*time.Location)(0x791c680)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 23 02:55:10.897: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
Dec 23 02:55:10.906: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-1595 /apis/apps/v1/namespaces/deployment-1595/deployments/test-rolling-update-deployment 07d4a682-fe95-4985-bb74-80592f42a4b7 23947964 1 2020-12-23 02:55:06 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003605a78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-12-23 02:55:07 +0000 UTC,LastTransitionTime:2020-12-23 02:55:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-12-23 02:55:10 +0000 UTC,LastTransitionTime:2020-12-23 02:55:06 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Dec 23 02:55:10.908: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-1595 /apis/apps/v1/namespaces/deployment-1595/replicasets/test-rolling-update-deployment-67cf4f6444 c4e53068-b78d-415f-a567-5df8dc79da26 23947951 1 2020-12-23 02:55:06 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 07d4a682-fe95-4985-bb74-80592f42a4b7 0xc003605f47 0xc003605f48}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003605fd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Dec 23 02:55:10.908: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 23 02:55:10.909: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-1595 /apis/apps/v1/namespaces/deployment-1595/replicasets/test-rolling-update-controller fe4ab10f-1e4d-4e21-ba7b-a053602df84c 23947961 2 2020-12-23 02:55:01 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 07d4a682-fe95-4985-bb74-80592f42a4b7 0xc003605e77 0xc003605e78}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003605ed8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 23 02:55:10.911: INFO: Pod "test-rolling-update-deployment-67cf4f6444-r9tm8" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-r9tm8 test-rolling-update-deployment-67cf4f6444- deployment-1595 /api/v1/namespaces/deployment-1595/pods/test-rolling-update-deployment-67cf4f6444-r9tm8 f0874651-58cc-4204-8e86-d326c19df4fe 23947950 0 2020-12-23 02:55:06 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 c4e53068-b78d-415f-a567-5df8dc79da26 0xc0060e8427 0xc0060e8428}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5422q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5422q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5422q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:55:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:55:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-12-23 02:55:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.10,PodIP:10.244.1.51,StartTime:2020-12-23 02:55:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-12-23 02:55:09 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://124d4540381ca69be85a26b868564a4cd7e3af64015378834fb98ffd93707218,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.51,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:55:10.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1595" for this suite.

• [SLOW TEST:9.223 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":248,"skipped":4004,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:55:10.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-74b58c6d-a780-4a30-9907-9dbd3a83312d in namespace container-probe-3202
Dec 23 02:55:15.010: INFO: Started pod busybox-74b58c6d-a780-4a30-9907-9dbd3a83312d in namespace container-probe-3202
STEP: checking the pod's current state and verifying that restartCount is present
Dec 23 02:55:15.012: INFO: Initial restart count of pod busybox-74b58c6d-a780-4a30-9907-9dbd3a83312d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:59:15.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3202" for this suite.

• [SLOW TEST:244.770 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4005,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:59:15.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-8494
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8494
STEP: Deleting pre-stop pod
Dec 23 02:59:28.816: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:59:28.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8494" for this suite.

• [SLOW TEST:13.171 seconds]
[k8s.io] [sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":250,"skipped":4014,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:59:28.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 02:59:28.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Dec 23 02:59:31.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-253 create -f -'
Dec 23 02:59:35.202: INFO: stderr: ""
Dec 23 02:59:35.202: INFO: stdout: "e2e-test-crd-publish-openapi-7882-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Dec 23 02:59:35.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-253 delete e2e-test-crd-publish-openapi-7882-crds test-foo'
Dec 23 02:59:35.307: INFO: stderr: ""
Dec 23 02:59:35.307: INFO: stdout: "e2e-test-crd-publish-openapi-7882-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Dec 23 02:59:35.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-253 apply -f -'
Dec 23 02:59:35.569: INFO: stderr: ""
Dec 23 02:59:35.569: INFO: stdout: "e2e-test-crd-publish-openapi-7882-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Dec 23 02:59:35.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-253 delete e2e-test-crd-publish-openapi-7882-crds test-foo'
Dec 23 02:59:35.686: INFO: stderr: ""
Dec 23 02:59:35.686: INFO: stdout: "e2e-test-crd-publish-openapi-7882-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Dec 23 02:59:35.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-253 create -f -'
Dec 23 02:59:35.923: INFO: rc: 1
Dec 23 02:59:35.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-253 apply -f -'
Dec 23 02:59:36.161: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Dec 23 02:59:36.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-253 create -f -'
Dec 23 02:59:36.390: INFO: rc: 1
Dec 23 02:59:36.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-253 apply -f -'
Dec 23 02:59:36.642: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Dec 23 02:59:36.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7882-crds'
Dec 23 02:59:36.902: INFO: stderr: ""
Dec 23 02:59:36.902: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7882-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Dec 23 02:59:36.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7882-crds.metadata'
Dec 23 02:59:37.186: INFO: stderr: ""
Dec 23 02:59:37.186: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7882-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Dec 23 02:59:37.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7882-crds.spec'
Dec 23 02:59:37.400: INFO: stderr: ""
Dec 23 02:59:37.400: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7882-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Dec 23 02:59:37.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7882-crds.spec.bars'
Dec 23 02:59:37.653: INFO: stderr: ""
Dec 23 02:59:37.653: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7882-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Dec 23 02:59:37.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7882-crds.spec.bars2'
Dec 23 02:59:37.915: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:59:40.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-253" for this suite.

• [SLOW TEST:11.936 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":251,"skipped":4040,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:59:40.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 02:59:44.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9881" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4048,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 02:59:44.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:00:02.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6266" for this suite.

• [SLOW TEST:17.147 seconds]
[sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":253,"skipped":4081,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:00:02.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Dec 23 03:00:02.163: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:00:09.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1583" for this suite.

• [SLOW TEST:7.608 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":254,"skipped":4104,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:00:09.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-1155
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1155 to expose endpoints map[]
Dec 23 03:00:09.819: INFO: Get endpoints failed (15.400714ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 23 03:00:10.822: INFO: successfully validated that service endpoint-test2 in namespace services-1155 exposes endpoints map[] (1.018118142s elapsed)
STEP: Creating pod pod1 in namespace services-1155
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1155 to expose endpoints map[pod1:[80]]
Dec 23 03:00:13.922: INFO: successfully validated that service endpoint-test2 in namespace services-1155 exposes endpoints map[pod1:[80]] (3.095115139s elapsed)
STEP: Creating pod pod2 in namespace services-1155
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1155 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 23 03:00:18.038: INFO: successfully validated that service endpoint-test2 in namespace services-1155 exposes endpoints map[pod1:[80] pod2:[80]] (4.112324615s elapsed)
STEP: Deleting pod pod1 in namespace services-1155
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1155 to expose endpoints map[pod2:[80]]
Dec 23 03:00:19.090: INFO: successfully validated that service endpoint-test2 in namespace services-1155 exposes endpoints map[pod2:[80]] (1.047428426s elapsed)
STEP: Deleting pod pod2 in namespace services-1155
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1155 to expose endpoints map[]
Dec 23 03:00:20.116: INFO: successfully validated that service endpoint-test2 in namespace services-1155 exposes endpoints map[] (1.022126519s elapsed)
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:00:20.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1155" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:10.532 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":255,"skipped":4120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:00:20.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-09c550cc-a5a8-44a3-a5cf-ec8e5e50b811
STEP: Creating a pod to test consume configMaps
Dec 23 03:00:20.780: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0eb7261-1a03-4c03-9b52-594b6663fd15" in namespace "configmap-8602" to be "success or failure"
Dec 23 03:00:20.925: INFO: Pod "pod-configmaps-d0eb7261-1a03-4c03-9b52-594b6663fd15": Phase="Pending", Reason="", readiness=false. Elapsed: 145.235397ms
Dec 23 03:00:22.928: INFO: Pod "pod-configmaps-d0eb7261-1a03-4c03-9b52-594b6663fd15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148583844s
Dec 23 03:00:24.932: INFO: Pod "pod-configmaps-d0eb7261-1a03-4c03-9b52-594b6663fd15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152386361s
STEP: Saw pod success
Dec 23 03:00:24.932: INFO: Pod "pod-configmaps-d0eb7261-1a03-4c03-9b52-594b6663fd15" satisfied condition "success or failure"
Dec 23 03:00:24.935: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d0eb7261-1a03-4c03-9b52-594b6663fd15 container configmap-volume-test: 
STEP: delete the pod
Dec 23 03:00:24.999: INFO: Waiting for pod pod-configmaps-d0eb7261-1a03-4c03-9b52-594b6663fd15 to disappear
Dec 23 03:00:25.010: INFO: Pod pod-configmaps-d0eb7261-1a03-4c03-9b52-594b6663fd15 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:00:25.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8602" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4195,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:00:25.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-5421
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating stateful set ss in namespace statefulset-5421
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5421
Dec 23 03:00:25.374: INFO: Found 0 stateful pods, waiting for 1
Dec 23 03:00:35.378: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 23 03:00:35.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5421 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 23 03:00:35.661: INFO: stderr: "I1223 03:00:35.518644    3287 log.go:172] (0xc0003866e0) (0xc0009ce1e0) Create stream\nI1223 03:00:35.518695    3287 log.go:172] (0xc0003866e0) (0xc0009ce1e0) Stream added, broadcasting: 1\nI1223 03:00:35.521166    3287 log.go:172] (0xc0003866e0) Reply frame received for 1\nI1223 03:00:35.521211    3287 log.go:172] (0xc0003866e0) (0xc00053e6e0) Create stream\nI1223 03:00:35.521228    3287 log.go:172] (0xc0003866e0) (0xc00053e6e0) Stream added, broadcasting: 3\nI1223 03:00:35.522069    3287 log.go:172] (0xc0003866e0) Reply frame received for 3\nI1223 03:00:35.522099    3287 log.go:172] (0xc0003866e0) (0xc0009ce280) Create stream\nI1223 03:00:35.522118    3287 log.go:172] (0xc0003866e0) (0xc0009ce280) Stream added, broadcasting: 5\nI1223 03:00:35.522865    3287 log.go:172] (0xc0003866e0) Reply frame received for 5\nI1223 03:00:35.623983    3287 log.go:172] (0xc0003866e0) Data frame received for 5\nI1223 03:00:35.624009    3287 log.go:172] (0xc0009ce280) (5) Data frame handling\nI1223 03:00:35.624023    3287 log.go:172] (0xc0009ce280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1223 03:00:35.647845    3287 log.go:172] (0xc0003866e0) Data frame received for 3\nI1223 03:00:35.647871    3287 log.go:172] (0xc00053e6e0) (3) Data frame handling\nI1223 03:00:35.647891    3287 log.go:172] (0xc00053e6e0) (3) Data frame sent\nI1223 03:00:35.647918    3287 log.go:172] (0xc0003866e0) Data frame received for 3\nI1223 03:00:35.647928    3287 log.go:172] (0xc00053e6e0) (3) Data frame handling\nI1223 03:00:35.648130    3287 log.go:172] (0xc0003866e0) Data frame received for 5\nI1223 03:00:35.648161    3287 log.go:172] (0xc0009ce280) (5) Data frame handling\nI1223 03:00:35.650278    3287 log.go:172] (0xc0003866e0) Data frame received for 1\nI1223 03:00:35.650312    3287 log.go:172] (0xc0009ce1e0) (1) Data frame handling\nI1223 03:00:35.650344    3287 log.go:172] (0xc0009ce1e0) (1) Data frame sent\nI1223 03:00:35.650366    3287 log.go:172] (0xc0003866e0) (0xc0009ce1e0) Stream removed, broadcasting: 1\nI1223 03:00:35.650398    3287 log.go:172] (0xc0003866e0) Go away received\nI1223 03:00:35.650789    3287 log.go:172] (0xc0003866e0) (0xc0009ce1e0) Stream removed, broadcasting: 1\nI1223 03:00:35.650817    3287 log.go:172] (0xc0003866e0) (0xc00053e6e0) Stream removed, broadcasting: 3\nI1223 03:00:35.650831    3287 log.go:172] (0xc0003866e0) (0xc0009ce280) Stream removed, broadcasting: 5\n"
Dec 23 03:00:35.661: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 23 03:00:35.661: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 23 03:00:35.664: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 23 03:00:45.669: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 03:00:45.669: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 03:00:45.688: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Dec 23 03:00:45.688: INFO: ss-0  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:25 +0000 UTC  }]
Dec 23 03:00:45.688: INFO: 
Dec 23 03:00:45.688: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 23 03:00:46.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990426344s
Dec 23 03:00:47.754: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985658987s
Dec 23 03:00:48.885: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.924056353s
Dec 23 03:00:49.888: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.793707478s
Dec 23 03:00:50.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.789810553s
Dec 23 03:00:51.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.784328658s
Dec 23 03:00:52.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.77987021s
Dec 23 03:00:53.908: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.77502725s
Dec 23 03:00:54.913: INFO: Verifying statefulset ss doesn't scale past 3 for another 770.521888ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5421
Dec 23 03:00:55.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5421 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 23 03:00:56.189: INFO: stderr: "I1223 03:00:56.098107    3310 log.go:172] (0xc0000f7600) (0xc000944000) Create stream\nI1223 03:00:56.098157    3310 log.go:172] (0xc0000f7600) (0xc000944000) Stream added, broadcasting: 1\nI1223 03:00:56.100629    3310 log.go:172] (0xc0000f7600) Reply frame received for 1\nI1223 03:00:56.100670    3310 log.go:172] (0xc0000f7600) (0xc0009440a0) Create stream\nI1223 03:00:56.100680    3310 log.go:172] (0xc0000f7600) (0xc0009440a0) Stream added, broadcasting: 3\nI1223 03:00:56.101823    3310 log.go:172] (0xc0000f7600) Reply frame received for 3\nI1223 03:00:56.101861    3310 log.go:172] (0xc0000f7600) (0xc000659ae0) Create stream\nI1223 03:00:56.101880    3310 log.go:172] (0xc0000f7600) (0xc000659ae0) Stream added, broadcasting: 5\nI1223 03:00:56.102811    3310 log.go:172] (0xc0000f7600) Reply frame received for 5\nI1223 03:00:56.179626    3310 log.go:172] (0xc0000f7600) Data frame received for 3\nI1223 03:00:56.179680    3310 log.go:172] (0xc0009440a0) (3) Data frame handling\nI1223 03:00:56.179701    3310 log.go:172] (0xc0009440a0) (3) Data frame sent\nI1223 03:00:56.179716    3310 log.go:172] (0xc0000f7600) Data frame received for 3\nI1223 03:00:56.179728    3310 log.go:172] (0xc0009440a0) (3) Data frame handling\nI1223 03:00:56.179751    3310 log.go:172] (0xc0000f7600) Data frame received for 5\nI1223 03:00:56.179767    3310 log.go:172] (0xc000659ae0) (5) Data frame handling\nI1223 03:00:56.179775    3310 log.go:172] (0xc000659ae0) (5) Data frame sent\nI1223 03:00:56.179782    3310 log.go:172] (0xc0000f7600) Data frame received for 5\nI1223 03:00:56.179786    3310 log.go:172] (0xc000659ae0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1223 03:00:56.181045    3310 log.go:172] (0xc0000f7600) Data frame received for 1\nI1223 03:00:56.181076    3310 log.go:172] (0xc000944000) (1) Data frame handling\nI1223 03:00:56.181108    3310 log.go:172] (0xc000944000) (1) Data frame sent\nI1223 03:00:56.181126    3310 log.go:172] (0xc0000f7600) (0xc000944000) Stream removed, broadcasting: 1\nI1223 03:00:56.181260    3310 log.go:172] (0xc0000f7600) Go away received\nI1223 03:00:56.181555    3310 log.go:172] (0xc0000f7600) (0xc000944000) Stream removed, broadcasting: 1\nI1223 03:00:56.181574    3310 log.go:172] (0xc0000f7600) (0xc0009440a0) Stream removed, broadcasting: 3\nI1223 03:00:56.181587    3310 log.go:172] (0xc0000f7600) (0xc000659ae0) Stream removed, broadcasting: 5\n"
Dec 23 03:00:56.189: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 23 03:00:56.189: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 23 03:00:56.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5421 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 23 03:00:56.394: INFO: stderr: "I1223 03:00:56.321212    3333 log.go:172] (0xc0009e3ad0) (0xc0009b2780) Create stream\nI1223 03:00:56.321256    3333 log.go:172] (0xc0009e3ad0) (0xc0009b2780) Stream added, broadcasting: 1\nI1223 03:00:56.325521    3333 log.go:172] (0xc0009e3ad0) Reply frame received for 1\nI1223 03:00:56.325582    3333 log.go:172] (0xc0009e3ad0) (0xc000676640) Create stream\nI1223 03:00:56.325599    3333 log.go:172] (0xc0009e3ad0) (0xc000676640) Stream added, broadcasting: 3\nI1223 03:00:56.326547    3333 log.go:172] (0xc0009e3ad0) Reply frame received for 3\nI1223 03:00:56.326578    3333 log.go:172] (0xc0009e3ad0) (0xc000799360) Create stream\nI1223 03:00:56.326585    3333 log.go:172] (0xc0009e3ad0) (0xc000799360) Stream added, broadcasting: 5\nI1223 03:00:56.327508    3333 log.go:172] (0xc0009e3ad0) Reply frame received for 5\nI1223 03:00:56.381888    3333 log.go:172] (0xc0009e3ad0) Data frame received for 3\nI1223 03:00:56.381923    3333 log.go:172] (0xc000676640) (3) Data frame handling\nI1223 03:00:56.381937    3333 log.go:172] (0xc000676640) (3) Data frame sent\nI1223 03:00:56.381944    3333 log.go:172] (0xc0009e3ad0) Data frame received for 3\nI1223 03:00:56.381958    3333 log.go:172] (0xc000676640) (3) Data frame handling\nI1223 03:00:56.382233    3333 log.go:172] (0xc0009e3ad0) Data frame received for 5\nI1223 03:00:56.382250    3333 log.go:172] (0xc000799360) (5) Data frame handling\nI1223 03:00:56.382259    3333 log.go:172] (0xc000799360) (5) Data frame sent\nI1223 03:00:56.382265    3333 log.go:172] (0xc0009e3ad0) Data frame received for 5\nI1223 03:00:56.382271    3333 log.go:172] (0xc000799360) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1223 03:00:56.383749    3333 log.go:172] (0xc0009e3ad0) Data frame received for 1\nI1223 03:00:56.383762    3333 log.go:172] (0xc0009b2780) (1) Data frame handling\nI1223 03:00:56.383776    3333 log.go:172] (0xc0009b2780) (1) Data frame sent\nI1223 03:00:56.383791    3333 log.go:172] (0xc0009e3ad0) (0xc0009b2780) Stream removed, broadcasting: 1\nI1223 03:00:56.383801    3333 log.go:172] (0xc0009e3ad0) Go away received\nI1223 03:00:56.384096    3333 log.go:172] (0xc0009e3ad0) (0xc0009b2780) Stream removed, broadcasting: 1\nI1223 03:00:56.384109    3333 log.go:172] (0xc0009e3ad0) (0xc000676640) Stream removed, broadcasting: 3\nI1223 03:00:56.384115    3333 log.go:172] (0xc0009e3ad0) (0xc000799360) Stream removed, broadcasting: 5\n"
Dec 23 03:00:56.394: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 23 03:00:56.394: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 23 03:00:56.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5421 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 23 03:00:56.608: INFO: stderr: "I1223 03:00:56.519741    3356 log.go:172] (0xc00079ee70) (0xc0008a3ea0) Create stream\nI1223 03:00:56.519797    3356 log.go:172] (0xc00079ee70) (0xc0008a3ea0) Stream added, broadcasting: 1\nI1223 03:00:56.522370    3356 log.go:172] (0xc00079ee70) Reply frame received for 1\nI1223 03:00:56.522423    3356 log.go:172] (0xc00079ee70) (0xc0007190e0) Create stream\nI1223 03:00:56.522441    3356 log.go:172] (0xc00079ee70) (0xc0007190e0) Stream added, broadcasting: 3\nI1223 03:00:56.523381    3356 log.go:172] (0xc00079ee70) Reply frame received for 3\nI1223 03:00:56.523450    3356 log.go:172] (0xc00079ee70) (0xc000840000) Create stream\nI1223 03:00:56.523491    3356 log.go:172] (0xc00079ee70) (0xc000840000) Stream added, broadcasting: 5\nI1223 03:00:56.524513    3356 log.go:172] (0xc00079ee70) Reply frame received for 5\nI1223 03:00:56.599179    3356 log.go:172] (0xc00079ee70) Data frame received for 5\nI1223 03:00:56.599218    3356 log.go:172] (0xc000840000) (5) Data frame handling\nI1223 03:00:56.599234    3356 log.go:172] (0xc000840000) (5) Data frame sent\nI1223 03:00:56.599247    3356 log.go:172] (0xc00079ee70) Data frame received for 5\nI1223 03:00:56.599260    3356 log.go:172] (0xc000840000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1223 03:00:56.599287    3356 log.go:172] (0xc00079ee70) Data frame received for 3\nI1223 03:00:56.599308    3356 log.go:172] (0xc0007190e0) (3) Data frame handling\nI1223 03:00:56.599332    3356 log.go:172] (0xc0007190e0) (3) Data frame sent\nI1223 03:00:56.599346    3356 log.go:172] (0xc00079ee70) Data frame received for 3\nI1223 03:00:56.599356    3356 log.go:172] (0xc0007190e0) (3) Data frame handling\nI1223 03:00:56.601286    3356 log.go:172] (0xc00079ee70) Data frame received for 1\nI1223 03:00:56.601326    3356 log.go:172] (0xc0008a3ea0) (1) Data frame handling\nI1223 03:00:56.601354    3356 log.go:172] (0xc0008a3ea0) (1) Data frame sent\nI1223 03:00:56.601378    3356 log.go:172] (0xc00079ee70) (0xc0008a3ea0) Stream removed, broadcasting: 1\nI1223 03:00:56.601398    3356 log.go:172] (0xc00079ee70) Go away received\nI1223 03:00:56.601670    3356 log.go:172] (0xc00079ee70) (0xc0008a3ea0) Stream removed, broadcasting: 1\nI1223 03:00:56.601685    3356 log.go:172] (0xc00079ee70) (0xc0007190e0) Stream removed, broadcasting: 3\nI1223 03:00:56.601691    3356 log.go:172] (0xc00079ee70) (0xc000840000) Stream removed, broadcasting: 5\n"
Dec 23 03:00:56.609: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 23 03:00:56.609: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 23 03:00:56.611: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 03:00:56.611: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 23 03:00:56.611: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Dec 23 03:00:56.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5421 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 23 03:00:56.857: INFO: stderr: "I1223 03:00:56.763270    3376 log.go:172] (0xc000577080) (0xc0008b8000) Create stream\nI1223 03:00:56.763332    3376 log.go:172] (0xc000577080) (0xc0008b8000) Stream added, broadcasting: 1\nI1223 03:00:56.772037    3376 log.go:172] (0xc000577080) Reply frame received for 1\nI1223 03:00:56.772086    3376 log.go:172] (0xc000577080) (0xc000a08000) Create stream\nI1223 03:00:56.772103    3376 log.go:172] (0xc000577080) (0xc000a08000) Stream added, broadcasting: 3\nI1223 03:00:56.783501    3376 log.go:172] (0xc000577080) Reply frame received for 3\nI1223 03:00:56.783542    3376 log.go:172] (0xc000577080) (0xc0008b80a0) Create stream\nI1223 03:00:56.783555    3376 log.go:172] (0xc000577080) (0xc0008b80a0) Stream added, broadcasting: 5\nI1223 03:00:56.788051    3376 log.go:172] (0xc000577080) Reply frame received for 5\nI1223 03:00:56.849937    3376 log.go:172] (0xc000577080) Data frame received for 5\nI1223 03:00:56.849976    3376 log.go:172] (0xc0008b80a0) (5) Data frame handling\nI1223 03:00:56.849988    3376 log.go:172] (0xc0008b80a0) (5) Data frame sent\nI1223 03:00:56.849996    3376 log.go:172] (0xc000577080) Data frame received for 5\nI1223 03:00:56.850003    3376 log.go:172] (0xc0008b80a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1223 03:00:56.850025    3376 log.go:172] (0xc000577080) Data frame received for 3\nI1223 03:00:56.850033    3376 log.go:172] (0xc000a08000) (3) Data frame handling\nI1223 03:00:56.850040    3376 log.go:172] (0xc000a08000) (3) Data frame sent\nI1223 03:00:56.850045    3376 log.go:172] (0xc000577080) Data frame received for 3\nI1223 03:00:56.850049    3376 log.go:172] (0xc000a08000) (3) Data frame handling\nI1223 03:00:56.851404    3376 log.go:172] (0xc000577080) Data frame received for 1\nI1223 03:00:56.851437    3376 log.go:172] (0xc0008b8000) (1) Data frame handling\nI1223 03:00:56.851455    3376 log.go:172] (0xc0008b8000) (1) Data frame sent\nI1223 03:00:56.851474    3376 log.go:172] (0xc000577080) (0xc0008b8000) Stream removed, broadcasting: 1\nI1223 03:00:56.851602    3376 log.go:172] (0xc000577080) Go away received\nI1223 03:00:56.851841    3376 log.go:172] (0xc000577080) (0xc0008b8000) Stream removed, broadcasting: 1\nI1223 03:00:56.851861    3376 log.go:172] (0xc000577080) (0xc000a08000) Stream removed, broadcasting: 3\nI1223 03:00:56.851870    3376 log.go:172] (0xc000577080) (0xc0008b80a0) Stream removed, broadcasting: 5\n"
Dec 23 03:00:56.857: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 23 03:00:56.857: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 23 03:00:56.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5421 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 23 03:00:57.144: INFO: stderr: "I1223 03:00:57.002065    3398 log.go:172] (0xc0009ac000) (0xc0006e1b80) Create stream\nI1223 03:00:57.002117    3398 log.go:172] (0xc0009ac000) (0xc0006e1b80) Stream added, broadcasting: 1\nI1223 03:00:57.004339    3398 log.go:172] (0xc0009ac000) Reply frame received for 1\nI1223 03:00:57.004364    3398 log.go:172] (0xc0009ac000) (0xc0006e1d60) Create stream\nI1223 03:00:57.004371    3398 log.go:172] (0xc0009ac000) (0xc0006e1d60) Stream added, broadcasting: 3\nI1223 03:00:57.005595    3398 log.go:172] (0xc0009ac000) Reply frame received for 3\nI1223 03:00:57.005638    3398 log.go:172] (0xc0009ac000) (0xc000a2c000) Create stream\nI1223 03:00:57.005649    3398 log.go:172] (0xc0009ac000) (0xc000a2c000) Stream added, broadcasting: 5\nI1223 03:00:57.006633    3398 log.go:172] (0xc0009ac000) Reply frame received for 5\nI1223 03:00:57.076543    3398 log.go:172] (0xc0009ac000) Data frame received for 5\nI1223 03:00:57.076581    3398 log.go:172] (0xc000a2c000) (5) Data frame handling\nI1223 03:00:57.076608    3398 log.go:172] (0xc000a2c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1223 03:00:57.132953    3398 log.go:172] (0xc0009ac000) Data frame received for 3\nI1223 03:00:57.132995    3398 log.go:172] (0xc0006e1d60) (3) Data frame handling\nI1223 03:00:57.133016    3398 log.go:172] (0xc0006e1d60) (3) Data frame sent\nI1223 03:00:57.133029    3398 log.go:172] (0xc0009ac000) Data frame received for 3\nI1223 03:00:57.133037    3398 log.go:172] (0xc0006e1d60) (3) Data frame handling\nI1223 03:00:57.133148    3398 log.go:172] (0xc0009ac000) Data frame received for 5\nI1223 03:00:57.133168    3398 log.go:172] (0xc000a2c000) (5) Data frame handling\nI1223 03:00:57.135165    3398 log.go:172] (0xc0009ac000) Data frame received for 1\nI1223 03:00:57.135206    3398 log.go:172] (0xc0006e1b80) (1) Data frame handling\nI1223 03:00:57.135231    3398 log.go:172] (0xc0006e1b80) (1) Data frame sent\nI1223 03:00:57.135258    3398 log.go:172] (0xc0009ac000) (0xc0006e1b80) Stream removed, broadcasting: 1\nI1223 03:00:57.135292    3398 log.go:172] (0xc0009ac000) Go away received\nI1223 03:00:57.135695    3398 log.go:172] (0xc0009ac000) (0xc0006e1b80) Stream removed, broadcasting: 1\nI1223 03:00:57.135715    3398 log.go:172] (0xc0009ac000) (0xc0006e1d60) Stream removed, broadcasting: 3\nI1223 03:00:57.135725    3398 log.go:172] (0xc0009ac000) (0xc000a2c000) Stream removed, broadcasting: 5\n"
Dec 23 03:00:57.144: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 23 03:00:57.144: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 23 03:00:57.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5421 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 23 03:00:57.391: INFO: stderr: "I1223 03:00:57.267337    3419 log.go:172] (0xc00091aa50) (0xc00064bd60) Create stream\nI1223 03:00:57.267387    3419 log.go:172] (0xc00091aa50) (0xc00064bd60) Stream added, broadcasting: 1\nI1223 03:00:57.269513    3419 log.go:172] (0xc00091aa50) Reply frame received for 1\nI1223 03:00:57.269561    3419 log.go:172] (0xc00091aa50) (0xc00028d4a0) Create stream\nI1223 03:00:57.269572    3419 log.go:172] (0xc00091aa50) (0xc00028d4a0) Stream added, broadcasting: 3\nI1223 03:00:57.270332    3419 log.go:172] (0xc00091aa50) Reply frame received for 3\nI1223 03:00:57.270366    3419 log.go:172] (0xc00091aa50) (0xc00064be00) Create stream\nI1223 03:00:57.270382    3419 log.go:172] (0xc00091aa50) (0xc00064be00) Stream added, broadcasting: 5\nI1223 03:00:57.271159    3419 log.go:172] (0xc00091aa50) Reply frame received for 5\nI1223 03:00:57.329003    3419 log.go:172] (0xc00091aa50) Data frame received for 5\nI1223 03:00:57.329052    3419 log.go:172] (0xc00064be00) (5) Data frame handling\nI1223 03:00:57.329091    3419 log.go:172] (0xc00064be00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1223 03:00:57.377691    3419 log.go:172] (0xc00091aa50) Data frame received for 3\nI1223 03:00:57.377732    3419 log.go:172] (0xc00028d4a0) (3) Data frame handling\nI1223 03:00:57.377764    3419 log.go:172] (0xc00028d4a0) (3) Data frame sent\nI1223 03:00:57.378361    3419 log.go:172] (0xc00091aa50) Data frame received for 5\nI1223 03:00:57.378391    3419 log.go:172] (0xc00064be00) (5) Data frame handling\nI1223 03:00:57.378418    3419 log.go:172] (0xc00091aa50) Data frame received for 3\nI1223 03:00:57.378429    3419 log.go:172] (0xc00028d4a0) (3) Data frame handling\nI1223 03:00:57.379777    3419 log.go:172] (0xc00091aa50) Data frame received for 1\nI1223 03:00:57.379802    3419 log.go:172] (0xc00064bd60) (1) Data frame handling\nI1223 03:00:57.379815    3419 log.go:172] (0xc00064bd60) (1) Data frame sent\nI1223 03:00:57.379829    3419 log.go:172] (0xc00091aa50) (0xc00064bd60) Stream removed, broadcasting: 1\nI1223 03:00:57.379853    3419 log.go:172] (0xc00091aa50) Go away received\nI1223 03:00:57.380293    3419 log.go:172] (0xc00091aa50) (0xc00064bd60) Stream removed, broadcasting: 1\nI1223 03:00:57.380315    3419 log.go:172] (0xc00091aa50) (0xc00028d4a0) Stream removed, broadcasting: 3\nI1223 03:00:57.380326    3419 log.go:172] (0xc00091aa50) (0xc00064be00) Stream removed, broadcasting: 5\n"
Dec 23 03:00:57.391: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 23 03:00:57.391: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 23 03:00:57.391: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 03:00:57.394: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 23 03:01:07.405: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 03:01:07.405: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 03:01:07.405: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 23 03:01:07.417: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Dec 23 03:01:07.417: INFO: ss-0  jerma-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:25 +0000 UTC  }]
Dec 23 03:01:07.417: INFO: ss-1  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  }]
Dec 23 03:01:07.417: INFO: ss-2  jerma-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  }]
Dec 23 03:01:07.417: INFO: 
Dec 23 03:01:07.417: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 23 03:01:08.422: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Dec 23 03:01:08.422: INFO: ss-0  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:25 +0000 UTC  }]
Dec 23 03:01:08.422: INFO: ss-1  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  }]
Dec 23 03:01:08.422: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  }]
Dec 23 03:01:08.422: INFO: 
Dec 23 03:01:08.422: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 23 03:01:09.427: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Dec 23 03:01:09.427: INFO: ss-0  jerma-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:25 +0000 UTC  }]
Dec 23 03:01:09.428: INFO: ss-1  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  }]
Dec 23 03:01:09.428: INFO: ss-2  jerma-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  }]
Dec 23 03:01:09.428: INFO: 
Dec 23 03:01:09.428: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 23 03:01:10.432: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Dec 23 03:01:10.432: INFO: ss-1  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  }]
Dec 23 03:01:10.432: INFO: ss-2  jerma-worker  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-12-23 03:00:45 +0000 UTC  }]
Dec 23 03:01:10.432: INFO: 
Dec 23 03:01:10.432: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 23 03:01:11.436: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.97803827s
Dec 23 03:01:12.440: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.974288235s
Dec 23 03:01:13.444: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.97057143s
Dec 23 03:01:14.447: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.966539493s
Dec 23 03:01:15.451: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.962758633s
Dec 23 03:01:16.455: INFO: Verifying statefulset ss doesn't scale past 0 for another 958.788105ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5421
Dec 23 03:01:17.459: INFO: Scaling statefulset ss to 0
Dec 23 03:01:17.467: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Dec 23 03:01:17.469: INFO: Deleting all statefulset in ns statefulset-5421
Dec 23 03:01:17.471: INFO: Scaling statefulset ss to 0
Dec 23 03:01:17.478: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 03:01:17.480: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:01:17.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5421" for this suite.

• [SLOW TEST:52.484 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":257,"skipped":4208,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:01:17.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 23 03:01:17.553: INFO: Waiting up to 5m0s for pod "pod-278a6fd2-74ed-4023-8ed7-b1eb125e59ed" in namespace "emptydir-820" to be "success or failure"
Dec 23 03:01:17.557: INFO: Pod "pod-278a6fd2-74ed-4023-8ed7-b1eb125e59ed": Phase="Pending", Reason="", readiness=false. Elapsed: 3.860529ms
Dec 23 03:01:19.603: INFO: Pod "pod-278a6fd2-74ed-4023-8ed7-b1eb125e59ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049792073s
Dec 23 03:01:21.607: INFO: Pod "pod-278a6fd2-74ed-4023-8ed7-b1eb125e59ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054030521s
STEP: Saw pod success
Dec 23 03:01:21.607: INFO: Pod "pod-278a6fd2-74ed-4023-8ed7-b1eb125e59ed" satisfied condition "success or failure"
Dec 23 03:01:21.610: INFO: Trying to get logs from node jerma-worker pod pod-278a6fd2-74ed-4023-8ed7-b1eb125e59ed container test-container: 
STEP: delete the pod
Dec 23 03:01:21.655: INFO: Waiting for pod pod-278a6fd2-74ed-4023-8ed7-b1eb125e59ed to disappear
Dec 23 03:01:21.658: INFO: Pod pod-278a6fd2-74ed-4023-8ed7-b1eb125e59ed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:01:21.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-820" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4246,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:01:21.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl rolling-update
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587
[It] should support rolling-update to same image [Deprecated] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 23 03:01:21.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6088'
Dec 23 03:01:21.832: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 23 03:01:21.832: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Dec 23 03:01:21.843: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Dec 23 03:01:21.862: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 23 03:01:21.889: INFO: scanned /root for discovery docs: 
Dec 23 03:01:21.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6088'
Dec 23 03:01:37.856: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 23 03:01:37.856: INFO: stdout: "Created e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a\nScaling up e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Dec 23 03:01:37.856: INFO: stdout: "Created e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a\nScaling up e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Dec 23 03:01:37.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-6088'
Dec 23 03:01:37.974: INFO: stderr: ""
Dec 23 03:01:37.974: INFO: stdout: "e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a-98kzr "
Dec 23 03:01:37.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a-98kzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6088'
Dec 23 03:01:38.067: INFO: stderr: ""
Dec 23 03:01:38.067: INFO: stdout: "true"
Dec 23 03:01:38.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a-98kzr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6088'
Dec 23 03:01:38.164: INFO: stderr: ""
Dec 23 03:01:38.165: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Dec 23 03:01:38.165: INFO: e2e-test-httpd-rc-6cf873d64fb2b7529772c490e4e9505a-98kzr is verified up and running
[AfterEach] Kubectl rolling-update
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1593
Dec 23 03:01:38.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6088'
Dec 23 03:01:38.310: INFO: stderr: ""
Dec 23 03:01:38.310: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:01:38.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6088" for this suite.

• [SLOW TEST:16.653 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582
    should support rolling-update to same image [Deprecated] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Deprecated] [Conformance]","total":278,"completed":259,"skipped":4254,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:01:38.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-d87d7ca7-0cef-4dc1-bbcc-cd73048ed85a
STEP: Creating secret with name s-test-opt-upd-30d3efb5-6be8-40cc-b527-44092d175c7c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-d87d7ca7-0cef-4dc1-bbcc-cd73048ed85a
STEP: Updating secret s-test-opt-upd-30d3efb5-6be8-40cc-b527-44092d175c7c
STEP: Creating secret with name s-test-opt-create-579797c1-c9f3-471f-b678-010a3f27eb5a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:01:46.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1948" for this suite.

• [SLOW TEST:8.207 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4255,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:01:46.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-37f5da23-f1f4-46ba-8a84-05f7e88c7598
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:01:52.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8713" for this suite.

• [SLOW TEST:6.177 seconds]
[sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4267,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:01:52.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-105529b5-d6d1-4e50-a1de-f48f167f9cdd
STEP: Creating a pod to test consume secrets
Dec 23 03:01:52.805: INFO: Waiting up to 5m0s for pod "pod-secrets-dd979afd-7bc2-41e8-ab62-a1f93c4f973c" in namespace "secrets-9295" to be "success or failure"
Dec 23 03:01:52.824: INFO: Pod "pod-secrets-dd979afd-7bc2-41e8-ab62-a1f93c4f973c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.233395ms
Dec 23 03:01:54.828: INFO: Pod "pod-secrets-dd979afd-7bc2-41e8-ab62-a1f93c4f973c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022709759s
Dec 23 03:01:56.831: INFO: Pod "pod-secrets-dd979afd-7bc2-41e8-ab62-a1f93c4f973c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026460764s
STEP: Saw pod success
Dec 23 03:01:56.831: INFO: Pod "pod-secrets-dd979afd-7bc2-41e8-ab62-a1f93c4f973c" satisfied condition "success or failure"
Dec 23 03:01:56.835: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-dd979afd-7bc2-41e8-ab62-a1f93c4f973c container secret-volume-test: 
STEP: delete the pod
Dec 23 03:01:56.865: INFO: Waiting for pod pod-secrets-dd979afd-7bc2-41e8-ab62-a1f93c4f973c to disappear
Dec 23 03:01:56.935: INFO: Pod pod-secrets-dd979afd-7bc2-41e8-ab62-a1f93c4f973c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:01:56.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9295" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4284,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:01:56.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-deb8e352-56e0-42a7-bf4a-3f02b2ef6f72
STEP: Creating a pod to test consume secrets
Dec 23 03:01:57.105: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f1ecde28-5b56-475d-be05-11d16cb232af" in namespace "projected-3340" to be "success or failure"
Dec 23 03:01:57.109: INFO: Pod "pod-projected-secrets-f1ecde28-5b56-475d-be05-11d16cb232af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.329494ms
Dec 23 03:01:59.124: INFO: Pod "pod-projected-secrets-f1ecde28-5b56-475d-be05-11d16cb232af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01895331s
Dec 23 03:02:01.131: INFO: Pod "pod-projected-secrets-f1ecde28-5b56-475d-be05-11d16cb232af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025541324s
STEP: Saw pod success
Dec 23 03:02:01.131: INFO: Pod "pod-projected-secrets-f1ecde28-5b56-475d-be05-11d16cb232af" satisfied condition "success or failure"
Dec 23 03:02:01.134: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-f1ecde28-5b56-475d-be05-11d16cb232af container projected-secret-volume-test: 
STEP: delete the pod
Dec 23 03:02:01.265: INFO: Waiting for pod pod-projected-secrets-f1ecde28-5b56-475d-be05-11d16cb232af to disappear
Dec 23 03:02:01.412: INFO: Pod pod-projected-secrets-f1ecde28-5b56-475d-be05-11d16cb232af no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:01.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3340" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4341,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:01.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 03:02:01.465: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 23 03:02:03.519: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:04.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6399" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":264,"skipped":4381,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:04.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 23 03:02:05.517: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45289ab4-a9ea-4363-b861-bc487baee86b" in namespace "downward-api-6557" to be "success or failure"
Dec 23 03:02:05.531: INFO: Pod "downwardapi-volume-45289ab4-a9ea-4363-b861-bc487baee86b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.602229ms
Dec 23 03:02:07.535: INFO: Pod "downwardapi-volume-45289ab4-a9ea-4363-b861-bc487baee86b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018232981s
Dec 23 03:02:09.539: INFO: Pod "downwardapi-volume-45289ab4-a9ea-4363-b861-bc487baee86b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022467445s
STEP: Saw pod success
Dec 23 03:02:09.539: INFO: Pod "downwardapi-volume-45289ab4-a9ea-4363-b861-bc487baee86b" satisfied condition "success or failure"
Dec 23 03:02:09.542: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-45289ab4-a9ea-4363-b861-bc487baee86b container client-container: 
STEP: delete the pod
Dec 23 03:02:09.602: INFO: Waiting for pod downwardapi-volume-45289ab4-a9ea-4363-b861-bc487baee86b to disappear
Dec 23 03:02:09.606: INFO: Pod downwardapi-volume-45289ab4-a9ea-4363-b861-bc487baee86b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:09.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6557" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4394,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:09.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 23 03:02:09.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e733f73f-ef77-41c8-9209-56f802e586b6" in namespace "projected-8399" to be "success or failure"
Dec 23 03:02:09.679: INFO: Pod "downwardapi-volume-e733f73f-ef77-41c8-9209-56f802e586b6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.574946ms
Dec 23 03:02:11.691: INFO: Pod "downwardapi-volume-e733f73f-ef77-41c8-9209-56f802e586b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029386186s
Dec 23 03:02:13.695: INFO: Pod "downwardapi-volume-e733f73f-ef77-41c8-9209-56f802e586b6": Phase="Running", Reason="", readiness=true. Elapsed: 4.033507376s
Dec 23 03:02:15.699: INFO: Pod "downwardapi-volume-e733f73f-ef77-41c8-9209-56f802e586b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037445067s
STEP: Saw pod success
Dec 23 03:02:15.699: INFO: Pod "downwardapi-volume-e733f73f-ef77-41c8-9209-56f802e586b6" satisfied condition "success or failure"
Dec 23 03:02:15.701: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e733f73f-ef77-41c8-9209-56f802e586b6 container client-container: 
STEP: delete the pod
Dec 23 03:02:15.733: INFO: Waiting for pod downwardapi-volume-e733f73f-ef77-41c8-9209-56f802e586b6 to disappear
Dec 23 03:02:15.739: INFO: Pod downwardapi-volume-e733f73f-ef77-41c8-9209-56f802e586b6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:15.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8399" for this suite.

• [SLOW TEST:6.135 seconds]
[sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4397,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:15.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support --unix-socket=/path  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Dec 23 03:02:15.781: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix168044035/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:15.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9794" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":267,"skipped":4401,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:15.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should support proxy with --port 0  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
Dec 23 03:02:15.947: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:16.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9977" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":268,"skipped":4417,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:16.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 03:02:16.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows requests with any unknown properties
Dec 23 03:02:19.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1558 create -f -'
Dec 23 03:02:22.297: INFO: stderr: ""
Dec 23 03:02:22.297: INFO: stdout: "e2e-test-crd-publish-openapi-3882-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Dec 23 03:02:22.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1558 delete e2e-test-crd-publish-openapi-3882-crds test-cr'
Dec 23 03:02:22.396: INFO: stderr: ""
Dec 23 03:02:22.396: INFO: stdout: "e2e-test-crd-publish-openapi-3882-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Dec 23 03:02:22.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1558 apply -f -'
Dec 23 03:02:22.659: INFO: stderr: ""
Dec 23 03:02:22.659: INFO: stdout: "e2e-test-crd-publish-openapi-3882-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Dec 23 03:02:22.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1558 delete e2e-test-crd-publish-openapi-3882-crds test-cr'
Dec 23 03:02:22.778: INFO: stderr: ""
Dec 23 03:02:22.778: INFO: stdout: "e2e-test-crd-publish-openapi-3882-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Dec 23 03:02:22.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3882-crds'
Dec 23 03:02:23.022: INFO: stderr: ""
Dec 23 03:02:23.022: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3882-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:24.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1558" for this suite.

• [SLOW TEST:8.901 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":269,"skipped":4418,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:24.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Dec 23 03:02:25.036: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:26.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8991" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":270,"skipped":4448,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:26.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a working application  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Dec 23 03:02:26.160: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Dec 23 03:02:26.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-427'
Dec 23 03:02:26.527: INFO: stderr: ""
Dec 23 03:02:26.527: INFO: stdout: "service/agnhost-slave created\n"
Dec 23 03:02:26.528: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Dec 23 03:02:26.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-427'
Dec 23 03:02:26.811: INFO: stderr: ""
Dec 23 03:02:26.811: INFO: stdout: "service/agnhost-master created\n"
Dec 23 03:02:26.812: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 23 03:02:26.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-427'
Dec 23 03:02:27.075: INFO: stderr: ""
Dec 23 03:02:27.076: INFO: stdout: "service/frontend created\n"
Dec 23 03:02:27.076: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Dec 23 03:02:27.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-427'
Dec 23 03:02:27.346: INFO: stderr: ""
Dec 23 03:02:27.346: INFO: stdout: "deployment.apps/frontend created\n"
Dec 23 03:02:27.346: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 23 03:02:27.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-427'
Dec 23 03:02:27.656: INFO: stderr: ""
Dec 23 03:02:27.656: INFO: stdout: "deployment.apps/agnhost-master created\n"
Dec 23 03:02:27.656: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 23 03:02:27.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-427'
Dec 23 03:02:27.885: INFO: stderr: ""
Dec 23 03:02:27.885: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Dec 23 03:02:27.885: INFO: Waiting for all frontend pods to be Running.
Dec 23 03:02:37.936: INFO: Waiting for frontend to serve content.
Dec 23 03:02:37.947: INFO: Trying to add a new entry to the guestbook.
Dec 23 03:02:37.960: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 23 03:02:37.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-427'
Dec 23 03:02:38.121: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 03:02:38.121: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 23 03:02:38.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-427'
Dec 23 03:02:38.268: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 03:02:38.268: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 23 03:02:38.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-427'
Dec 23 03:02:38.390: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 03:02:38.390: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 23 03:02:38.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-427'
Dec 23 03:02:38.499: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 03:02:38.499: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 23 03:02:38.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-427'
Dec 23 03:02:38.613: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 03:02:38.613: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 23 03:02:38.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-427'
Dec 23 03:02:38.722: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 23 03:02:38.722: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:38.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-427" for this suite.

• [SLOW TEST:12.644 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:381
    should create and stop a working application  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":271,"skipped":4470,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:38.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Dec 23 03:02:38.797: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 23 03:02:38.842: INFO: Waiting for terminating namespaces to be deleted...
Dec 23 03:02:38.847: INFO: 
Logging pods the kubelet thinks are on node jerma-worker before test
Dec 23 03:02:38.853: INFO: kindnet-nlsvd from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.853: INFO: 	Container kindnet-cni ready: true, restart count 0
Dec 23 03:02:38.853: INFO: frontend-6c5f89d5d4-rv946 from kubectl-427 started at 2020-12-23 03:02:27 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.853: INFO: 	Container guestbook-frontend ready: true, restart count 0
Dec 23 03:02:38.853: INFO: agnhost-slave-774cfc759f-2kn7w from kubectl-427 started at 2020-12-23 03:02:27 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.853: INFO: 	Container slave ready: true, restart count 0
Dec 23 03:02:38.853: INFO: chaos-controller-manager-7f9bbd476f-jm8nf from default started at 2020-11-22 21:56:29 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.853: INFO: 	Container chaos-mesh ready: true, restart count 0
Dec 23 03:02:38.853: INFO: frontend-6c5f89d5d4-l5htl from kubectl-427 started at 2020-12-23 03:02:27 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.853: INFO: 	Container guestbook-frontend ready: true, restart count 0
Dec 23 03:02:38.853: INFO: kube-proxy-knc9b from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.853: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 03:02:38.853: INFO: chaos-daemon-r2kj7 from default started at 2020-11-22 21:56:29 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.853: INFO: 	Container chaos-daemon ready: true, restart count 0
Dec 23 03:02:38.853: INFO: 
Logging pods the kubelet thinks are on node jerma-worker2 before test
Dec 23 03:02:38.929: INFO: agnhost-master-74c46fb7d4-bkn7v from kubectl-427 started at 2020-12-23 03:02:27 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.929: INFO: 	Container master ready: true, restart count 0
Dec 23 03:02:38.929: INFO: chaos-daemon-mzgg5 from default started at 2020-11-22 21:56:28 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.929: INFO: 	Container chaos-daemon ready: true, restart count 0
Dec 23 03:02:38.929: INFO: frontend-6c5f89d5d4-lx8l9 from kubectl-427 started at 2020-12-23 03:02:27 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.929: INFO: 	Container guestbook-frontend ready: true, restart count 0
Dec 23 03:02:38.929: INFO: kindnet-5wksn from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.929: INFO: 	Container kindnet-cni ready: true, restart count 0
Dec 23 03:02:38.929: INFO: kube-proxy-jgndm from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.929: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 03:02:38.929: INFO: agnhost-slave-774cfc759f-k285s from kubectl-427 started at 2020-12-23 03:02:27 +0000 UTC (1 container statuses recorded)
Dec 23 03:02:38.929: INFO: 	Container slave ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.1653395bf2f0559d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:39.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7235" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":272,"skipped":4474,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:39.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Dec 23 03:02:40.385: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f52bf79e-7c65-4fbc-9dcc-7102c2f20d79" in namespace "projected-7529" to be "success or failure"
Dec 23 03:02:40.422: INFO: Pod "downwardapi-volume-f52bf79e-7c65-4fbc-9dcc-7102c2f20d79": Phase="Pending", Reason="", readiness=false. Elapsed: 36.520621ms
Dec 23 03:02:42.449: INFO: Pod "downwardapi-volume-f52bf79e-7c65-4fbc-9dcc-7102c2f20d79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06374338s
Dec 23 03:02:44.463: INFO: Pod "downwardapi-volume-f52bf79e-7c65-4fbc-9dcc-7102c2f20d79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077549457s
STEP: Saw pod success
Dec 23 03:02:44.463: INFO: Pod "downwardapi-volume-f52bf79e-7c65-4fbc-9dcc-7102c2f20d79" satisfied condition "success or failure"
Dec 23 03:02:44.483: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f52bf79e-7c65-4fbc-9dcc-7102c2f20d79 container client-container: 
STEP: delete the pod
Dec 23 03:02:44.549: INFO: Waiting for pod downwardapi-volume-f52bf79e-7c65-4fbc-9dcc-7102c2f20d79 to disappear
Dec 23 03:02:44.555: INFO: Pod downwardapi-volume-f52bf79e-7c65-4fbc-9dcc-7102c2f20d79 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:44.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7529" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4488,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:44.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1760
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 23 03:02:44.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6984'
Dec 23 03:02:44.742: INFO: stderr: ""
Dec 23 03:02:44.742: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1765
Dec 23 03:02:44.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6984'
Dec 23 03:02:54.354: INFO: stderr: ""
Dec 23 03:02:54.354: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:54.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6984" for this suite.

• [SLOW TEST:9.762 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1756
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":274,"skipped":4490,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:54.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a ReplicationController is created
STEP: When the matching label of one of its pods changes
Dec 23 03:02:54.448: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 23 03:02:59.477: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:02:59.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3183" for this suite.

• [SLOW TEST:5.268 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":275,"skipped":4501,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:02:59.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's args
Dec 23 03:02:59.761: INFO: Waiting up to 5m0s for pod "var-expansion-3b56116e-54fc-43cf-bcf3-aa7380804faf" in namespace "var-expansion-9427" to be "success or failure"
Dec 23 03:02:59.797: INFO: Pod "var-expansion-3b56116e-54fc-43cf-bcf3-aa7380804faf": Phase="Pending", Reason="", readiness=false. Elapsed: 35.769169ms
Dec 23 03:03:01.801: INFO: Pod "var-expansion-3b56116e-54fc-43cf-bcf3-aa7380804faf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039947128s
Dec 23 03:03:03.804: INFO: Pod "var-expansion-3b56116e-54fc-43cf-bcf3-aa7380804faf": Phase="Running", Reason="", readiness=true. Elapsed: 4.043037029s
Dec 23 03:03:05.988: INFO: Pod "var-expansion-3b56116e-54fc-43cf-bcf3-aa7380804faf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.227208743s
STEP: Saw pod success
Dec 23 03:03:05.988: INFO: Pod "var-expansion-3b56116e-54fc-43cf-bcf3-aa7380804faf" satisfied condition "success or failure"
Dec 23 03:03:05.991: INFO: Trying to get logs from node jerma-worker pod var-expansion-3b56116e-54fc-43cf-bcf3-aa7380804faf container dapi-container: 
STEP: delete the pod
Dec 23 03:03:06.061: INFO: Waiting for pod var-expansion-3b56116e-54fc-43cf-bcf3-aa7380804faf to disappear
Dec 23 03:03:06.070: INFO: Pod var-expansion-3b56116e-54fc-43cf-bcf3-aa7380804faf no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:03:06.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9427" for this suite.

• [SLOW TEST:6.447 seconds]
[k8s.io] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4513,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:03:06.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Dec 23 03:03:06.432: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 23 03:03:06.458: INFO: Waiting for terminating namespaces to be deleted...
Dec 23 03:03:06.461: INFO: Logging pods the kubelet thinks are on node jerma-worker before test
Dec 23 03:03:06.466: INFO: chaos-daemon-r2kj7 from default started at 2020-11-22 21:56:29 +0000 UTC (1 container status recorded)
Dec 23 03:03:06.466: INFO: 	Container chaos-daemon ready: true, restart count 0
Dec 23 03:03:06.466: INFO: kube-proxy-knc9b from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container status recorded)
Dec 23 03:03:06.466: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 23 03:03:06.466: INFO: chaos-controller-manager-7f9bbd476f-jm8nf from default started at 2020-11-22 21:56:29 +0000 UTC (1 container status recorded)
Dec 23 03:03:06.466: INFO: 	Container chaos-mesh ready: true, restart count 0
Dec 23 03:03:06.466: INFO: kindnet-nlsvd from kube-system started at 2020-09-23 08:27:39 +0000 UTC (1 container status recorded)
Dec 23 03:03:06.466: INFO: 	Container kindnet-cni ready: true, restart count 0
Dec 23 03:03:06.466: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test
Dec 23 03:03:06.471: INFO: chaos-daemon-mzgg5 from default started at 2020-11-22 21:56:28 +0000 UTC (1 container status recorded)
Dec 23 03:03:06.471: INFO: 	Container chaos-daemon ready: true, restart count 0
Dec 23 03:03:06.471: INFO: kindnet-5wksn from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container status recorded)
Dec 23 03:03:06.471: INFO: 	Container kindnet-cni ready: true, restart count 0
Dec 23 03:03:06.471: INFO: kube-proxy-jgndm from kube-system started at 2020-09-23 08:27:38 +0000 UTC (1 container status recorded)
Dec 23 03:03:06.471: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: verifying the node has the label node jerma-worker
STEP: verifying the node has the label node jerma-worker2
Dec 23 03:03:06.583: INFO: Pod chaos-controller-manager-7f9bbd476f-jm8nf requesting resource cpu=25m on Node jerma-worker
Dec 23 03:03:06.583: INFO: Pod chaos-daemon-mzgg5 requesting resource cpu=0m on Node jerma-worker2
Dec 23 03:03:06.583: INFO: Pod chaos-daemon-r2kj7 requesting resource cpu=0m on Node jerma-worker
Dec 23 03:03:06.583: INFO: Pod kindnet-5wksn requesting resource cpu=100m on Node jerma-worker2
Dec 23 03:03:06.583: INFO: Pod kindnet-nlsvd requesting resource cpu=100m on Node jerma-worker
Dec 23 03:03:06.583: INFO: Pod kube-proxy-jgndm requesting resource cpu=0m on Node jerma-worker2
Dec 23 03:03:06.583: INFO: Pod kube-proxy-knc9b requesting resource cpu=0m on Node jerma-worker
STEP: Starting Pods to consume most of the cluster CPU.
Dec 23 03:03:06.583: INFO: Creating a pod which consumes cpu=11112m on Node jerma-worker
Dec 23 03:03:06.589: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-80dcc340-710e-4d51-8481-354c8b04edd0.1653396260f9dcf9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6767/filler-pod-80dcc340-710e-4d51-8481-354c8b04edd0 to jerma-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-80dcc340-710e-4d51-8481-354c8b04edd0.16533962ac6ae2a2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-80dcc340-710e-4d51-8481-354c8b04edd0.1653396324b9b2c0], Reason = [Created], Message = [Created container filler-pod-80dcc340-710e-4d51-8481-354c8b04edd0]
STEP: Considering event: Type = [Normal], Name = [filler-pod-80dcc340-710e-4d51-8481-354c8b04edd0.16533963376352ee], Reason = [Started], Message = [Started container filler-pod-80dcc340-710e-4d51-8481-354c8b04edd0]
STEP: Considering event: Type = [Normal], Name = [filler-pod-aaedb69f-b1e5-48cd-965d-ee878ff24fd1.1653396260f868d4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6767/filler-pod-aaedb69f-b1e5-48cd-965d-ee878ff24fd1 to jerma-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-aaedb69f-b1e5-48cd-965d-ee878ff24fd1.16533962f6c54715], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-aaedb69f-b1e5-48cd-965d-ee878ff24fd1.165339633bb018a8], Reason = [Created], Message = [Created container filler-pod-aaedb69f-b1e5-48cd-965d-ee878ff24fd1]
STEP: Considering event: Type = [Normal], Name = [filler-pod-aaedb69f-b1e5-48cd-965d-ee878ff24fd1.165339634d38c046], Reason = [Started], Message = [Started container filler-pod-aaedb69f-b1e5-48cd-965d-ee878ff24fd1]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16533963cb1e055f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: Type = [Warning], Name = [additional-pod.16533963cd6f5acc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:03:13.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6767" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77

• [SLOW TEST:7.679 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":277,"skipped":4552,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 23 03:03:13.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1470
[It] Should recreate evicted statefulset [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node on which to schedule the stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1470
STEP: Creating statefulset with conflicting port in namespace statefulset-1470
STEP: Waiting until pod test-pod starts running in namespace statefulset-1470
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1470
Dec 23 03:03:17.914: INFO: Observed stateful pod in namespace: statefulset-1470, name: ss-0, uid: c5440f46-1f02-49f6-806d-ae4a5c87e3b0, status phase: Pending. Waiting for statefulset controller to delete it.
Dec 23 03:03:18.294: INFO: Observed stateful pod in namespace: statefulset-1470, name: ss-0, uid: c5440f46-1f02-49f6-806d-ae4a5c87e3b0, status phase: Failed. Waiting for statefulset controller to delete it.
Dec 23 03:03:18.300: INFO: Observed stateful pod in namespace: statefulset-1470, name: ss-0, uid: c5440f46-1f02-49f6-806d-ae4a5c87e3b0, status phase: Failed. Waiting for statefulset controller to delete it.
Dec 23 03:03:18.306: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1470
STEP: Removing pod with conflicting port in namespace statefulset-1470
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1470 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Dec 23 03:03:22.402: INFO: Deleting all statefulset in ns statefulset-1470
Dec 23 03:03:22.405: INFO: Scaling statefulset ss to 0
Dec 23 03:03:32.426: INFO: Waiting for statefulset status.replicas updated to 0
Dec 23 03:03:32.428: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 23 03:03:32.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1470" for this suite.

• [SLOW TEST:18.693 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":278,"skipped":4560,"failed":0}
SSSSSSSS
Dec 23 03:03:32.450: INFO: Running AfterSuite actions on all nodes
Dec 23 03:03:32.450: INFO: Running AfterSuite actions on node 1
Dec 23 03:03:32.450: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4568,"failed":0}

Ran 278 of 4846 Specs in 4367.442 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4568 Skipped
PASS